Hello, everyone, and welcome to our next EDW session, called Meeting the Data-Related Challenges of Cloud Migration, which will be presented by Danny Sandwell, the Product Marketing Director at erwin by Quest. All audience members are muted during these sessions, so please submit your questions in the Q&A window on the right-hand side of your screen, and our speaker will respond to as many questions as possible at the end of the talk. Please note that there is a linked form at the bottom of the page titled EDW Conference Session Survey. This is where you can submit your feedback, and we encourage you to do so. So let's begin our presentation now. Thank you, and welcome, Danny.

Hey, thanks, Shannon. Thanks, everyone, for taking the time to join us today. Hopefully this is a very relevant topic for you; I know we are finding that it is very relevant with all of our customers out there. For those of you that don't know me, there's a picture of me back in the days when we actually used to have face-to-face contact. I guess I'm just reminiscing a little bit. But my whole career has been about helping organizations get more out of their data, moving it from something that's a natural byproduct of running their business to something that's actually driving their business to greater value: really unlocking the value of data assets and mitigating those data-related risks. For those of you not familiar with erwin, we've grown from our data modeling roots to put together a combination of technologies and solutions that will really help you harness your data, get the most out of it, and target it appropriately.
For all the drivers that you have, whether it's digital transformation, regulatory compliance, business continuity, whatever it is, you can leverage our technology to bring the right people together with the right capabilities to get the most out of your data and, again, mitigate the risks that may be associated with using it in these new and wonderful ways. We do have some pedigree: beyond our many years of leading the data modeling world, we've turned that into a strong capability in metadata management, data governance, and data intelligence. A couple of years ago, we debuted in the Magic Quadrant, and this year we were the only ones to move up and across, so hopefully that's in the right direction. We're well recognized, and again, customers across all sorts of verticals are definitely feeling the benefits of leveraging our capabilities to solve their problems. So, the topic is migrating to the cloud. There are lots of different topics that we could talk about, but this one seems very appropriate for this day and age. Between the pandemic, the speed of business, and the competitive and dynamic world that we all do business in, the cloud is becoming not an if but a when. And with the advent of all these very rich cloud data management platforms that are out there, organizations are moving beyond specific applications like a Salesforce and really putting their core capabilities in the cloud, so they can get that performance, that scalability, a lower total cost of ownership if it's done right, and really leverage these capabilities in a way that not only gets them to the answers that they need faster, but also has a level of security and availability behind it that organizations need in these strange and wonderful times in which we live.
But what we find, and what a lot of our customers are finding, is that it's a great idea, and if you're starting with a greenfield, moving to the cloud seems like a natural thing. But if you have value and you have legacy and you have capabilities in your organization that you don't want to just throw away and restart from the beginning, there are some real challenges to realizing these modernization benefits that the cloud can deliver. Breaking it down into two major categories: first, it's taking what you have today that is delivering value, potentially in an on-premise situation, and getting it over into the cloud in a way that's effective, that's accurate, that doesn't take far too much time, and that doesn't carry a huge cost that takes away from the value you're trying to get. And then, of course, once that's out there, it's making sure that the data is just as well-governed on day one, and just as accessible and understandable to all of the different stakeholders in your organization, so that you can truly democratize your data and get the biggest benefit from it by having all of these people and organizations working together with real visibility, control, and collaboration. So, I always like to start with this idea of what data intelligence is, because data intelligence is where you practice data governance; it's where you automate key elements of your data management processes to make sure all of those things are in sync, they're all visible, and they're all well understood.
So if you think about the typical complex world that we all live in from a data perspective, represented in the lower left, with all of these different technologies, regulations, use cases, and challenges, it's about making sense of it so that no matter who you are and how you're approaching data, whether you're an end user, a DBA, or somewhere in between, you can navigate it with full visibility, do that in a manner and context that's comfortable for you, and really be part of the solution. We do that in data intelligence by harvesting your entire physical world, everything from your data sources, how data moves through them, and the processes that integrate that data, right to the consumption endpoints. We put that into a place where it can be curated with all sorts of important context and rich metrics around how it relates to your technology architecture as well as your business architecture. We ensure that you put the workflows and all of the information in place so that you know who's accountable, who can help you with that data, what rules and policies apply, and what type of regulatory compliance it may be associated with. Then we take that framework and activate it to get benefit out of it, automating key aspects that are very challenging: things like data lineage, impact analysis, and migrating from one platform to another, as we're going to talk about today. And then, through that entire process, you have an access point, a social media app if you will, that socializes all of this information and allows people to get into it, build communities, have fun, input their tribal knowledge for the betterment of all, and really become much more effective in using data for their business.
And getting more utility from your metadata is key, because the metadata is there, you're not paying for it, you have it, and it's really about putting it in the right situation for it to do much more than it's doing for you today. Whether that's auto-documenting and being able to connect in and bring that in a meaningful way across the entire data life cycle; whether it's helping drive greater curation and associating things that aren't necessarily easy to associate out of the gate, things like your different policies, your different rules, and sensitive data classification; or whether it's enabling you to provide an environment where people can get, on demand, things like end-to-end lineage from any point within that journey, impact analysis, and graph-type visualizations that allow you to navigate through it intelligently, and then really drill down, through the use of things like dashboards, into specific elements of the data to get rid of the noise and get to the insights that you need. And of course, automating your pipelines and your workloads, and auto-generating the changes that are going to come in this agile environment, making sure that you can orchestrate that platform without a huge uptick in manual intervention, which again drives up costs and reduces your speed and agility. So this is very, very important, especially for the topic we're looking at today. As we look at moving to the cloud, again, if you're net new and you don't have a data warehouse or a data lake, you're better off than most of us, but most organizations have a lot of value and a lot of investment in those places. So they need a way to lift that and move it over into this new environment quickly, efficiently, effectively, and accurately. And we've put together a bunch of capabilities that exist across our technologies that will actually help you do that very thing.
So whether it's converting the data structures that you may have in your data warehouse or in your on-premise Hadoop lake and pointing them to these modern technologies, the Snowflakes, the Azure Synapses, all of those things; helping you take the data and reload it into that environment quickly and efficiently; or taking the opportunity to transform your data movement so that you have a richer base of data that hasn't been transformed until it reaches the point of a use case, so that data is available for whatever use case comes along, even one you haven't envisioned today. And then realigning the usage models that exist out in technologies that may be using things like cubes on premise, taking advantage of these new environments, and then scripting all of that together to automate ongoing DevOps in these environments, with the real, true benefit of having continuous data governance, not just for what you've created on this new platform but through the entire process, so that you have traceability, visibility, and auditability of how you got from point A to point B built into the actual process of moving those things. So let's dig into it a little bit more. It starts with the data model. I'm an old data modeler; I've been with erwin for many, many years. And the nature of our tool is that it's a heterogeneous tool that supports all sorts of database platforms, whether it's your traditional Oracle and SQL Servers, your MySQLs, out to your cloud databases like Snowflake, Azure, and MariaDB, and NoSQL databases like Mongo and Couch. For any of those, we give you the capability within our tool to reverse engineer what you have, take that model, leverage our capabilities to transform it, point it at this new technology, and then redeploy it through our forward engineering capability.
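To make the reverse-engineer, transform, and forward-engineer step concrete, here is a minimal, editor-added sketch of the kind of data-type retargeting such a schema migration performs, using Oracle-to-Snowflake as the example. The rule table and the helper function are assumptions for illustration only, not erwin's actual conversion logic.

```python
# Illustrative sketch only: data-type retargeting during a model-driven
# schema migration (e.g. Oracle -> Snowflake). The rule table below is an
# assumption for demonstration, not erwin's actual conversion rules.
TYPE_RULES = {
    "NUMBER": "NUMBER",
    "VARCHAR2": "VARCHAR",
    "CLOB": "VARCHAR",          # Snowflake VARCHAR can hold large text
    "DATE": "TIMESTAMP_NTZ",    # Oracle DATE carries a time component
}

def retarget_column(name, source_type, length=None):
    """Return a target-platform column definition for one source column."""
    target = TYPE_RULES.get(source_type.upper(), source_type.upper())
    if length and target == "VARCHAR":
        return f"{name} {target}({length})"
    return f"{name} {target}"

# Forward engineer a DDL statement for the retargeted model.
columns = [
    ("CUSTOMER_ID", "NUMBER", None),
    ("FULL_NAME", "VARCHAR2", 200),
    ("CREATED_AT", "DATE", None),
]
ddl = "CREATE TABLE CUSTOMER (\n  " + ",\n  ".join(
    retarget_column(*c) for c in columns) + "\n);"
print(ddl)
```

In a real migration the rule table would also cover precision, scale, and platform-specific defaults; the point here is only that the mapping is declarative, so it can be applied consistently across an entire model.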
So there are lots of capabilities in the tool, not just to move the schema from one target to another, but to specify how you want data types to transform across that migration, making sure that you're maintaining the naming standards you've created to standardize your environment, and leveraging all of that standardization and reuse that exists in the data modeling environment you're using every day to deploy new data sources to the business. It starts with that, and from there you can move into the data intelligence capability, which is a combination of a data catalog and a data literacy suite, to quickly map what you have in one schema in one environment to the other. The purpose of this is to accelerate the reloading of data into the new environment. We don't come with a data movement engine, but we let you leverage whatever data movement engine you have: you start with our abstracted mapping documents to quickly map those things, automate the transformations, and then regenerate a job that can be quickly run in your native ETL tools to push that data across, leveraging all of the technologies that exist in that cloud data platform to make it as efficient and effective as possible. From there, once you've got that data loaded, you really need to start thinking about whether the tools you have in place are going to do the job. Are we just repointing existing data movement processes, or are we actually going to leverage some of the modern capabilities, modern languages, and modern scripting that exist on these platforms, again so that you can do a better and more efficient job and have more of that core foundation of your data sitting on that platform for true effectiveness? So we have a complexity assessment that goes through and looks at what you have from many different angles, slicing and dicing it, so that you can see: how complex is your environment? How frequently do you use similar components?
What kind of design patterns do you have in those data movement processes? How good a fit is this new technology for the capabilities that you have today and the capabilities that you want moving forward? And then we'll enable you to provide a clear timeline in terms of how much automation you can take advantage of and what sort of manual touch-ups and interventions will be required, based on your situation and your architecture, to make sure that everybody is clear about what you can achieve and when you can achieve it, and to set expectations appropriately. So it's very, very powerful in understanding and automating the lion's share, and we've seen reductions of 70, 80, even 90 percent against the timelines proposed under a traditional approach, where we can do it that much faster with a high degree of accuracy and, again, traceability throughout the entire process. And then you move on: you've loaded the data, you've assessed, and you can make the choices. Am I just going to repoint my data movement and migrate those jobs to that new environment in the cloud? Or am I going to take full advantage of it and actually convert my data movement technologies? Again, we've reverse engineered them into these abstracted logical mapping documents, and we can point that through our technology at any of the target technologies, whether it's legacy or the latest and greatest out there, and recreate that architecture, again with a high degree of accuracy and integrity and with very, very little risk, letting you take advantage more quickly of all of those capabilities you were looking for when you moved to the cloud in the first place. But it doesn't stop at data movement. You have a lot of technologies out there, a lot of them leveraging things on premise like multidimensional cubes, where you've got different technologies preparing data for specific uses.
But the cloud has lots of capabilities and lots of efficiency built in, and they are slightly different. So again, using our smart connectors, we can reverse engineer what you have in terms of business intelligence, reporting hubs, all of those different technologies that are delivering information to the business. And we will show you how to convert, again with full integrity and in an automated fashion, from these multidimensional cubes to a tabular format that is more appropriate and can leverage the power of the cloud, to give you the results that you want today but also give you much more flexibility as you bring more use cases online, leveraging things like AI, ML, and all the good work that data scientists are doing out there. So it's moving those attributes, measures, and relationships into an environment that's much more flexible, much more agile, and purpose-built for your future with data and how you want to leverage it moving forward. And once you're there, because you've done all of this in the data intelligence environment, you've now got a clear view of what your as-is was, if that makes sense, and what your to-be looks like. And it puts it into this environment where we can then start associating all of the important things with those technical assets: business terminology that makes the data easier to understand and makes people more literate about the data that you have; the policies and procedures; and the tagging for sensitive data, whether it's sensitive data by category or the specific regulatory schemes that you have in place, or need to have in place, to make sure that you're in compliance, not at risk of being fined, or even worse, becoming the next poster child for bad data through a very public breach or an impact on the privacy of your customers' data.
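The cube-to-tabular realignment described above can be sketched in a few lines. This is an editor-added, hypothetical example: the cube structure and the naming convention for flattened columns are assumptions for illustration, not the output of erwin's smart connectors.

```python
# Illustrative sketch only: re-expressing a multidimensional cube definition
# (dimensions + measures) as a flat tabular column list, the kind of
# realignment described above. The cube below is an assumed example.
cube = {
    "dimensions": {
        "Time": ["Year", "Month"],
        "Product": ["Category", "SKU"],
    },
    "measures": ["SalesAmount", "Quantity"],
}

def cube_to_tabular(cube):
    """Flatten every dimension attribute and measure into one column list."""
    cols = [
        f"{dim}_{attr}"
        for dim, attrs in cube["dimensions"].items()
        for attr in attrs
    ]
    return cols + cube["measures"]

print(cube_to_tabular(cube))
```

A real conversion also has to carry hierarchies, relationships, and calculated measures across, but the essence is the same: attributes that lived on cube dimensions become ordinary columns in a tabular model.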
So this goes right down to making sure that as you transform your business and bring on these new capabilities, you're not at risk of having your reputation and your relationship with your customers impacted, because, again, you only get one kick at the can; the only companies that really get remembered around data in the news are the ones that had a bad situation and did it wrong. While you're in there, now you have lineage, and again, you have lineage of what it used to look like and what it looks like today. Within our solution, this lineage is on demand from any point. It's not something that you have to create and maintain; it's presented to you based on the requests that you make. So, lineage from wherever you want: do you want to go forward? Do you want to go back? How much detail do you want? Do you need to see the transformations? All of this can be filtered so that you can get lineage views for different people with different needs. You can get rid of a lot of the noise, and you can have a lot of rich information there on screen for them to see. So it's very, very powerful for real understanding, especially for people that are used to doing things the old way and now find themselves in this new environment: you have to make sure that they have a high degree of trust in the nature and pedigree of the data you're delivering, and you can do that on day one because, again, this is all a byproduct of what you've captured and the mechanism you've used to automate the move to this new capability and technology. You can leverage things like those dashboards so that you can see the classification of data, making sure that the classification follows along the entire journey, and leverage things like impact analysis and lineage to actually propagate those classifications, so that you don't have to go to every step along the journey to make sure that you're doing things correctly and consistently.
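The idea of propagating a sensitivity classification along lineage, rather than tagging every downstream column by hand, can be shown with a small graph walk. This is an editor-added sketch under assumed names; the lineage edges and the "PII" tag are invented for the example and do not reflect erwin's internal model.

```python
# Illustrative sketch only: propagating a sensitive-data classification
# downstream along a lineage graph so derived columns inherit the tag
# automatically. Graph and tag names are assumptions for the example.
from collections import deque

# Edges: a column -> the downstream columns it feeds.
lineage = {
    "crm.customer.ssn": ["stage.cust.ssn"],
    "stage.cust.ssn": ["dw.dim_customer.ssn", "mart.report.ssn_masked"],
}

def propagate(tagged, start, tag):
    """Breadth-first walk that carries one classification downstream."""
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if tagged.get(node) == tag:
            continue  # already tagged; avoid revisiting in cyclic graphs
        tagged[node] = tag
        queue.extend(lineage.get(node, []))
    return tagged

tags = propagate({}, "crm.customer.ssn", "PII")
print(tags)
```

Because the walk follows the same edges a lineage view would render, the classification lands on exactly the columns a human would find by tracing the diagram, which is what makes this safe to automate.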
Let the tool do that for you, and then it gives you a view into it. And again, this view can be for sensitive data in general, or you can have a view that looks at GDPR or CCPA compliance. If you're in the healthcare world, you can drill down and provide views that are meaningful to your line of business, to the world that you live in, and to the problems that you want to solve for your organization. So, I know I've been very quick in moving you through this, but to put it in perspective, it is only a half hour. Hopefully this gave you a flavor of the types of things that you can do using data intelligence. In fact, we're calling this combination something: we're calling it Cloud Catalyst. If you visit us at erwin.com, we'd love to have a conversation to see where you are on that journey and what challenges you're facing, whether you're moving to the cloud or just trying to modernize your on-prem architecture with new capabilities and new solutions that give you more of what you need. We can help you in all of those environments, no matter how hybrid your environment is. So, go take a look at Cloud Catalyst, please reach out to us, and please come and visit us. We have a booth here at EDW with lots of people that are much smarter than me who can take you down into the details of how these things happen and how we can really help you with your migration challenges and your data governance challenges, and really create a richer data fabric in your organization to meet the needs that you have today, tomorrow, and the day after that. With that, I think I may have left a couple of minutes for questions. So, Shannon, if you want to fire at me, I'm happy to oblige.

Danny, thank you so much for this great presentation. If you have questions for Danny, feel free to submit them in the Q&A on the right-hand side of your screen. I do see a question queued up here: how is source-to-target mapping done automatically?
So we have a number of ways of doing it. A lot of organizations have been doing source-to-target mapping, and they've done it in many ways; I would say the most common way I've seen it done is in that old faithful standby, the Excel spreadsheet. So in our data intelligence suite, in the data catalog, we have a mapping manager, and you really have a number of ways to create mappings. You can create them manually using the metadata that's in the underlying metadata repository; it's a very easy drag and drop, and there are lots of accelerators in there that will do intelligent mapping for you. If you have mappings that you've invested in in spreadsheets, but you're not getting the utility out of them, they're tough to maintain, they're never up to date, and you're always pointing people to the wrong mapping document, you can automatically import them using our APIs and the technology, and turn those spreadsheets into discrete mapping documents that you can then put under lifecycle control and start to manipulate moving forward. And then again, using our smart data connectors, we can actually go into any technology that you have, whether it's procedural code in the database, ETL tools, or business intelligence and reporting-type tools, reverse engineer those mappings from the code, and create, again, these abstracted documents that you can then point and forward engineer code from as well, using smart connectors. This is at the core of how you leverage that logical data movement model to really give you the agility you need to modernize your data architecture and the data movement processes behind it.

Well, Danny, that is all the questions I see coming in right now. Anything else you want to add?

No, just that, if you did notice, we were acquired by Quest at the beginning of this calendar year, after a very successful year last year.
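As an editor-added illustration of the answer above, here is a minimal sketch of what an abstracted source-to-target mapping document can look like, how a legacy mapping spreadsheet (exported as CSV) might be imported into that shape, and how a simple load job could be forward engineered from it. All field names, the CSV layout, and the SQL generation are assumptions for the example, not erwin's actual API or file format.

```python
# Illustrative sketch only: an abstracted source-to-target mapping document,
# a spreadsheet import into that shape, and naive forward engineering of a
# load job from it. Names and formats are assumptions, not erwin's API.
import csv
import io
from dataclasses import dataclass

@dataclass
class Mapping:
    source_table: str
    source_column: str
    target_table: str
    target_column: str
    transform: str = ""  # optional in-flight expression, e.g. UPPER(NAME)

def import_sheet(text):
    """Turn a legacy mapping spreadsheet (CSV export) into mapping records."""
    return [Mapping(**row) for row in csv.DictReader(io.StringIO(text))]

def to_sql(mappings):
    """Forward engineer a simple INSERT..SELECT job from one mapping document."""
    cols = ", ".join(m.target_column for m in mappings)
    exprs = ", ".join(m.transform or m.source_column for m in mappings)
    return (f"INSERT INTO {mappings[0].target_table} ({cols})\n"
            f"SELECT {exprs}\nFROM {mappings[0].source_table};")

sheet = """source_table,source_column,target_table,target_column,transform
STG_CUST,CUST_NO,DIM_CUSTOMER,CUSTOMER_ID,
STG_CUST,NAME,DIM_CUSTOMER,FULL_NAME,UPPER(NAME)
"""
doc = import_sheet(sheet)
print(to_sql(doc))
```

The value of the abstraction is that the same mapping records could just as easily be emitted as a job for any ETL engine, which is what the talk means by regenerating jobs for your native tools.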
So for those of you that aren't familiar with Quest, it's an excellent company that has a lot of capabilities, from identity management to platform support, as well as deep capabilities around data, and what erwin brings to the table in data intelligence and data governance is very important for them. And we should be seeing some pretty exciting announcements and press releases in terms of what this marriage of erwin and Quest is going to mean for our customers moving forward. It's just going to accelerate our ability to deliver in this domain and also provide you with linkage down into some of the natural places that would make sense, to make it even more powerful in terms of your ability to take advantage of that data. So please visit us at the erwin booth. Like I said, we've got people there that can answer all of your questions. And thank you so much for taking the time to spend it with us; enjoy the conference.

Hey, Danny, we did have one quick question that popped up while you were chatting. I just want to sneak it in since we have a couple minutes left. Does erwin Cloud Catalyst have the capability to convert existing third normal form or star schema models to a Data Vault 2.0 model?

Absolutely, it does. Thank you for reminding me of that; that's a very important one for us. We have Data Vault modeling in our data modeling solution, but equally as important, we have a full capability to automate your Data Vault environment. So it's not just deploying and migrating the schema over, but putting a full orchestration capability around that Data Vault, because the whole purpose of Data Vault is to give you a much more agile data warehouse that you can bring the necessary changes into without having to basically lift the whole thing up and change it every single time. So there's a lot of capability around Data Vault, both in design and in full-on automation. We are strongly connected with Mr.
Linstedt and all of those folks, and it's a big focus of what we do and how we help our customers get to that new data capability that they crave.

And one last question here that we'll sneak in: what version of erwin is required for cloud migration?

So, Cloud Catalyst has a number of products in it. From a data modeler perspective, you absolutely can do this with the existing version today, which is DM 2020 R2, and then it's erwin Data Intelligence 10.2. Both of those are coming out with richer features, and again, you should see some announcements over the next month or so as well. But this is all possible in the tools today, and it will be even more capable with the next releases that we have going forward.

Well, Danny, thank you so much for this great presentation. We just want to note again that there is a linked form at the bottom of the page titled EDW Conference Session Survey. This is where you can submit feedback from today's session. And that wraps us up. You are welcome to continue networking with other attendees within the SpotMe app as we take a quick break between sessions and get ready to start our keynote here in just a few minutes. We look forward to seeing you then. Danny, thank you so much.

Thanks for the opportunity. Have a great day, everybody.