Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Manager of Data Diversity. We'd like to thank you for joining this Data Diversity webinar, Accelerate and Assure the Adoption of Cloud Data Platforms Using Intelligent Data Automation, sponsored today by Irwin. Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. For questions, we'll be collecting them via the Q&A in the bottom right-hand corner of your screen. Or if you'd like to tweet, we encourage you to share highlights or questions via Twitter using hashtag Data Diversity. And if you'd like to chat with each other, we certainly encourage you to do so; just click the chat icon in the bottom right-hand corner of your screen for that feature. As always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and additional information requested throughout the webinar.

Now let me introduce our speaker for today, Danny Sandwell. Danny is an IT industry veteran with more than 30 years of experience. As Director of Product Marketing for Irwin, he is responsible for communicating the technical capabilities and business value of the company's data modeling and data intelligence solutions. During Danny's 20-plus years with the company, he has also worked in pre-sales consulting, product management, business development, and business strategy roles, all of which gave him opportunities to engage with customers across various industries as they plan, develop, and manage their data architectures. His goal is to help enterprises unlock the value of their data assets to produce the desired results while mitigating data-related risks. And with that, I will turn the floor over to Danny to get today's webinar started.

Thank you, Shannon, and thank you everybody for taking the time to join us. Every time I listen to that bio it all sounds very impressive, but at the end of the day I'm really just a data guy, and I have a feeling that a few of the people on this line might be data people as well. So let's get started. What we're looking at today is a unique capability that we've built up over time based on the needs of our customer base: a very powerful capability in terms of giving you the agility to move to the platforms that will best serve your business going forward in what I think it's fairly safe to say are some very strange, challenging, unique, but at the same time exciting times.

Since this is sponsored by Irwin, if you're not familiar with us as an organization, here's a quick look at what we do and what we bring to the market. We've brought together a set of solutions that give you the capabilities and tool sets required to understand your enterprise at all levels of detail. That starts with our Irwin Evolve suite, which covers enterprise architecture, business process, and innovation management.
Evolve is about understanding the state of the enterprise today, from both a business and a technical perspective: what are the goals and desires of your enterprise, what are the strategies to fulfill them, and how do they map onto the capabilities, technologies, infrastructures, and data you have, so you can plan, analyze, and design a way forward, and then manage that process collaboratively with clear visibility into where you are today, where you want to get to, who's responsible for what, and where you are on that journey. That way everybody can be fully informed, set expectations appropriately, and be prepared to leverage transformation and innovation on day one.

Next is our flagship data modeling product, which is our legacy and our pedigree: we're the leader in delivering technology that allows you to get the most out of your data sources, whether you're designing new ones, integrating them, understanding what's in them, or communicating to the larger organization what data is available and how to access it.

And at our foundation now is our data intelligence suite, a combination of a data catalog and a data literacy suite. It allows you to take all of your technical data assets, document them in the form of metadata, provide insights around them, and then put a strong governance and intelligence framework on top in the form of a business glossary manager, AI and machine learning capabilities, workflow, and a business user portal. That brings business context to your technical data assets and lets you apply policies and rules, so people know how to use the data, how not to use it, what data is sensitive, and what the nuances around that data are, and they become much more data literate and effective.

Finally, there are our data connectors: standard data connectors for all of the data sources you have across the organization, and smart data connectors, which will be a big focus of today's talk. These give you the ability to take that data catalog and business glossary and activate all of the metadata in them, and use that activation to automate tasks so you can do things better, faster, and cheaper, with much less risk of missing the mark on what the organization wants you to deliver when it comes to data. So that's what we do, and it's something we're seeing a lot of organizations very interested in as they take the journey towards becoming a data-driven enterprise, because they understand there are a lot of moving pieces, especially for organizations with a legacy of data management that want to give the business the agility it requires without the risks that data can represent in today's world.
So, the title: Accelerate and Assure the Adoption of Cloud Data Platforms. What I'm talking about here is specifically focused on some of the great cloud-hosted data platforms out there, but if your modernization needs don't include the cloud for whatever reason, whether your organization isn't ready or there are barriers to adoption, and you're modernizing on premise, all of this still applies, and the value behind this capability will still be very useful for you. So whether you're moving from on premise out to something like Snowflake or Azure Data Factory, this is very appropriate, but if you're taking some of your legacy capabilities and technologies and modernizing them in-house, the same value proposition applies.

So why are people moving towards the cloud or looking at data platform modernization? It's not a mystery: digital transformation. The ability to take businesses that have been around for a long time in a classic brick-and-mortar, in-person business model and find new routes to market, new customers, and better ways to deliver value and satisfaction to those customers, elevating their ability to compete in the markets they choose to compete in. A lot of this is through data-driven innovation: finding new ways to do things based on what the data in and around your organization is telling you, informed insights rather than gut feelings or best guesses. As we've seen, 2020 has been a very interesting year on a lot of levels. Business continuity continues to be a big driver, and becomes more of one as we look at the kinds of things that could interrupt our business or put it at risk, and as we make sure we're set up to be successful no matter what the next thing may bring us. And then, of course, financial optimization: not just cutting costs, but making the best use of the money and resources you have, targeting them at things that drive real value in the business rather than just keeping the lights on and the hamsters running on the wheels in the back rooms. Organizations in every vertical, private sector and public sector alike, face these challenges, and these goals are driving them to modernize their data capability, whether that's in the cloud or by bringing a better mousetrap in-house, and setting the organization up for success.

When we look at what we, and a lot of other people, are calling the enterprise data dilemma, there's a lot of great research being done by Stewart Bond at IDC. I thought I'd bring some of it up because we see this every day anecdotally with customers and prospects, but this gives a good global view of the sheer amount of data out there and its rate of growth. I don't think it's going to stop, and I think that growth percentage will continue to accelerate as businesses and the organizations that serve them innovate, and as data becomes ubiquitous and much more available: different types of data that were never considered before are now being brought into the pot, mixed into that stew for the business.
When we look at organizations, this is a pretty powerful figure: 95% of organizations are integrating up to six different types of data, across ten different types of technologies, which is no mean feat. Taking that to the next level, it's not just being able to store that data, manage it, and ensure its integrity, but helping the people who consume that data for business benefit get better at using it and become better data citizens overall, and that is no small task. And 94% of organizations are starting to integrate data across hybrid cloud environments, with elements of on-prem and different cloud approaches and architectures, which, while it can solve a lot of problems, can bring a lot more complexity in, especially for an organization with a strong legacy of existing data management processes and infrastructure that needs to move forward into this modern world.

Data intelligence is being targeted as the answer to a lot of these problems, and we see that every single day. Data intelligence is the ability to increase your capability, understanding, and control around the data you have, with the goal of bringing together the folks in data management, data development and operations, and data governance with the business community that consumes data, all on a single pane of glass that tells them the truth, so they can develop trust in the data they're using, gain greater capability to use it for benefit, and realize the dream of becoming a data-driven organization.

We're seeing a real impact at the individual data worker level from this environment complexity and lack of intelligence around data. The old 80-20 rule is shifting in the wrong direction, to 85-15: data workers are spending 85% of their time just trying to figure out what data they have, what it means, whether it's fit for use, and how to use it for the use case in front of them, and only 15% of their time actually delivering insights and value to the business. That's a ratio that cannot stand and needs to move in the other direction. And this last piece out of the IDC survey was really telling to me: on average, data workers are more often unsuccessful than successful in their tasks. So not only are they spending an inordinate amount of time on the grunt work of figuring this stuff out, they're still not getting it right, which means we're deploying defects into the enterprise. That affects operations, but it also affects how your organization is perceived internally by the people using that data and potentially getting burned. And depending on how your organization is configured, the visibility that outsiders, customers, and partners have into that lack of success will also impact your reputation and how they perceive you as a trusted partner. So these are pretty important things.
They're very impactful, and they truly need a solution that helps organizations avoid this and change those numbers, so that people going forward will take a data-driven approach because they know they can trust the data, they're fluent enough in it to be successful, and it will have a positive rather than negative impact on the business.

So this brings us to why folks are looking at platform modernization around their data. There are the larger benefits the cloud brings in terms of performance, scalability, and that elastic model where you always have enough compute power behind what you're doing. You don't have to make huge plans and run projects to ensure you're ready for that; the cloud provider takes it off your back with the technologies and processes behind their cloud, giving you the opportunity to see a lower total cost of ownership and a future-proof environment that won't be significantly adversely impacted by whatever comes along in this world we live in. At the end of the day, it sets you up to get more value from your data as an organization, which is the end goal.

Then when you look at Azure, Snowflake, AWS, and the others out there, they're bringing together not just a new environment with servers and compute power in a great cost model, but key capabilities that let you centralize and simplify a lot of what you do around data, such as a high-performance data store that supports hybrid modalities. I just saw something come up in the chat: Google, absolutely Google. It's funny, I always like to put three things on a slide, but Google is also offering cloud data platform capabilities; there was no specific reason for leaving them out. You're seeing agile data integration built in, along with integrated BI and analytics. When we look at Microsoft and Azure, what they're doing with Synapse and Data Factory is a very powerful combination. It simplifies not just which technologies you have, pay for, and need support for, but puts everything into one environment, so the job of integrating it all and making it work becomes less complex because it's integrated out of the box, made to work together. Now you just have to find the right capabilities on that platform to set yourself up for success. A big part of transformation, innovation, and business continuity is taking advantage of this ongoing maturation of cloud environments and the emergence of providers focused on delivering not just a place to host an application, but a real data-centric capability that can be a game changer for an organization. Exciting times.

As always when we have exciting times, and because I never like to go too far to one end or the other, we have to say: but there might be some hurdles or challenges to realizing all that. From the customers and folks we're dealing with, we're seeing two main ones.
First, it's relatively easy if you're an organization that does not have a robust data management legacy: you can go to the cloud and start deciding what to build and deploy out there. That's still not an easy job, and you still need tooling and capabilities to do it efficiently and effectively. But the real complexity arrives when you start migrating things that are serving the business today over to that new platform, and that challenge is itself twofold. First, the migration process: delivering the time to value the business expects, and making sure you're moving things from one environment to the other accurately, so that integrity is maintained and there's no loss of capability or business continuity. Then there's cost containment, because these are complex systems sitting in-house that aren't always well documented, which is exactly where governance and intelligence come in. You're trying to lift all of that and put it somewhere else, into this new technology, without losing anything, while trying to gain the benefits of the new capabilities that drove you to that platform in the first place.

The second challenge is as important, if not more so, because a migration project is not necessarily a one-time thing. You might do it in a staged fashion, bringing things over gradually, but its impact is, to a certain extent, time-boxed. Data governance and intelligence are not: just because your data is out in the cloud doesn't mean it's any less valuable or any less vulnerable. You need transparency and visibility through the migration process, making sure things are traceable back to the way they used to be. Then you need to document these cutting-edge technologies, potentially integrate that documentation and knowledge base with technologies still sitting on-prem, and then take that whole ball of wax and democratize it out to all of the people who might care.

There are other interesting numbers out there around data literacy. Gartner just put out a very good report looking at organizations codifying data literacy programs, and one of the biggest challenges they face is the foundation, the facility, that will enable those literacy programs to succeed. It's not just the data scientist or the data architect or the business analyst; it's everyone in the organization having a common language and being able to leverage that language for success, whether it's somebody in management knowing where their data comes from at a system level, or a data architect knowing the details of how data changes as it moves from one place to another on its journey through the organization. All of those questions need to be answered, and organizations see that as another key enabler.

So these are two huge challenges. Getting things over to the new platform is tough enough. Making sure you understand how it got there, can actually prove it, and can show people what they have in the new platform so they can make the most of it, is a major risk to that time-to-value proposition.
When we look at modernizing the data architecture using automation, specifically metadata activation, or taking a metadata-driven approach, which is how we do things, there are some key areas that can bring you closer to realizing that time to value: ensuring accuracy, reducing manual touch and the costs behind it, and having governance through the process and in place so the platform is governed on day one as you deploy it to the business.

First, we'll look at transforming and deploying schemas to these new data management platforms. You may have a lot of different technologies and databases out there, and you're moving to the new platform because it offers high-performance database capability and the ability to bring different formats into one environment, instead of maintaining specialty databases for this job or that format. Then you've got to move the data itself, making sure it comes across from the legacy system into the new system ready to go, with all the integrity that's required. If you're keeping the data movement technologies you have today, you have to at the very least repoint them to the new environment and make sure you're not losing anything in the ongoing loading and maintenance. But in a lot of cases, because of the nature of these new platforms and the needs of the business, you actually need to replatform all of that data movement logic onto new technologies to take advantage of the capabilities they offer. And from there, you need a repeatable, automated DevOps process, because it's great to get things to the new environment, but if you cannot change that environment and meet the next business requirement with the required speed and agility, you're still no closer to realizing the increased value from data that organizations are looking for.

Let's take this in steps. First: modernizing the data architecture by migrating those database structures to the cloud. One of the best ways to do that is with a data model, because it takes a lot of the manual work out. There's a lot of automation in data modeling: the ability to reverse engineer, read the metadata, and create a useful, usable graphical model to start working with. Data modeling technologies have transformation capabilities, because our tool and many others support multiple databases and database types within the same technology (we certainly do), so you can retarget structures and let the tool transform tables, columns, constraints, data types, and naming standards, quickly and effectively, without somebody having to go in and reinvent the wheel. Then the ability to forward engineer out of those tools and deploy the new schema, with everything transformed as required, is very powerful. On top of that, you're starting to create a very useful logical model as well.
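To make that schema transformation step concrete, here's a minimal sketch of the kind of retargeting a model-driven migration automates. The type map, table, and function names are hypothetical illustrations under assumed Oracle-to-Snowflake conventions, not Irwin's actual rules or API.

```python
# Minimal sketch of model-driven schema retargeting: reverse-engineered
# legacy column metadata in, target-platform DDL out. The type map and
# sample table are hypothetical, not any vendor's actual rules.

# Hypothetical mapping of Oracle-style types to Snowflake types.
TYPE_MAP = {
    "NUMBER": "NUMBER",
    "VARCHAR2": "VARCHAR",
    "DATE": "TIMESTAMP_NTZ",
    "CLOB": "VARCHAR",
}

def transform_column(name, legacy_type, length=None):
    """Retarget one reverse-engineered column at the new platform."""
    target_type = TYPE_MAP.get(legacy_type, "VARCHAR")  # safe default
    if length and target_type == "VARCHAR":
        target_type = f"VARCHAR({length})"
    return f"{name.upper()} {target_type}"

def forward_engineer(table, columns):
    """Emit CREATE TABLE DDL for the target platform."""
    cols = ",\n  ".join(transform_column(*c) for c in columns)
    return f"CREATE TABLE {table.upper()} (\n  {cols}\n);"

# Reverse-engineered legacy metadata: (name, type, optional length).
legacy_columns = [
    ("cust_id", "NUMBER", None),
    ("cust_name", "VARCHAR2", 100),
    ("created_dt", "DATE", None),
]

print(forward_engineer("customer", legacy_columns))
```

In the real tooling, constraints and naming standards ride through the same transformation; the point is that the retargeting is rule-driven rather than hand-rebuilt.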
If you don't have models for your databases, now you can start to build in all that business context and do some of the grunt work required for governance and intelligence, things like classifying your data; data modeling tools support a very robust process of interrogating and discovering things about your data and documenting them as part of the overall design and deployment. So it's a strong capability: quickly take a structure, repoint it at a new technology, maintain all the required consistency, get it done fast, and then use those models to work on other aspects of the project as you move forward. Getting those structures moved across is a big, big first step. Whether you're taking a legacy data warehouse, or a data lake that you implemented on-premise and now want to move to the cloud, a data modeling approach is going to get you there faster, with a high degree of integrity through the transformation process.

Now I need to introduce a concept before we move on to the other aspects: data mappings, or mapping documents. Most organizations are doing mappings at some level, but what we're finding is that most are not using technologies for data mapping that provide all the utility they need to be successful. I look at data mappings as the logical models for data movement in your organization. This is where you capture that metadata, but in an environment where you can actually activate it for maximum utility. So you can capture data movement; with our connectors, you can scan and auto-document the code. The majority of ETL, procedural code, and newer big data scripting environments produce XML, and we scan and auto-document that and bring it into a mapping environment that's very powerful and has a great amount of utility, where you can abstract away from any given technology and keep the essence of source, transformation, and target. Then you can push that forward, whether that's transforming it and using another smart connector to target Snowflake or Azure or Talend or whatever your new data movement technology is, or taking stored procedures and other procedural code in your database systems today and moving them into the new data environment. You can do all of that in an automated fashion using these mapping documents.

But these mapping documents go far beyond switching technologies and generating code in a new environment. They also become the foundation for discovering and rendering your lineage, because a mapping is a true document of how data physically moves through your organization, which is exactly what lineage represents. You get forward and reverse lineage at a great level of detail, without having to go back and figure out lineage after the fact; it's there to be queried, because it exists in these logical mapping documents. And then, of course, impact analysis: where are these things used, who's using them, what are they used for? That's a huge foundation for establishing the value of data in your organization.
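Here's a minimal sketch of what one of those mapping records might capture, and how reverse lineage falls out of simply querying them. The field names and column paths are assumptions for illustration, not a real product schema.

```python
from dataclasses import dataclass

# Minimal sketch of a source-to-target mapping record -- the "logical
# model for data movement". Field and column names are illustrative.
@dataclass
class Mapping:
    source: str          # e.g. "crm.customer.email"
    transformation: str  # e.g. "LOWER(TRIM(email))"
    target: str          # e.g. "stg.customer.email"

mappings = [
    Mapping("crm.customer.email", "LOWER(TRIM(email))",
            "stg.customer.email"),
    Mapping("stg.customer.email", "direct move",
            "dw.dim_customer.email_addr"),
]

def reverse_lineage(column, maps):
    """Walk the mappings backwards from a column to its origin."""
    hops, current = [], column
    while True:
        step = next((m for m in maps if m.target == current), None)
        if step is None:
            return hops
        hops.append(step)
        current = step.source

for m in reverse_lineage("dw.dim_customer.email_addr", mappings):
    print(f"{m.target}  <=  {m.transformation}  <=  {m.source}")
```

Because every hop is already documented, lineage is a query over existing records, not an after-the-fact archaeology project.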
As I said, those mappings may exist in spreadsheets and a lot of other places today, but those places don't provide the utility. You really want them in an environment that lets you take a graphical, drag-and-drop approach and bring intelligence into the mapping to accelerate the work. Auto-documented mappings are great for capturing what things looked like in the past, but those things are going to change, so you want the mappings in an environment that is fully capable and has lifecycle control, so you can manage the mapping process: people can see which mappings are in progress, which are published, and what replaced what, with traceability through that entire history as your data platform changes with new business requirements. So a very important piece of this is getting everything documented in an environment with a large amount of utility.

In that environment you can quickly bring in those models, or, if you've deployed the databases you transformed using the modeling technology, bring those in, and use the automation in the mapping environment to automate the direct load of old data into new data. These are fairly simple mappings in terms of source and target, with little transformation beyond perhaps dealing with anomalies between the platforms: quickly establish mappings from your as-is system to your to-be system, then use a smart connector to generate the code that builds and automates the job of bulk loading your data from one environment to the next. The same environment handles repointing: the automation will detect whether a source or target has changed, make the changes to your mappings, and let you bless and publish the new versions. So now you've also taken care of simple repointing of ETL or data movement if you're staying with the technologies you have today.

But now we get to the one that's a bit more of a beast to tackle: automating the conversion of ETL processes and logic to these new technologies. Here you're not just changing the data source and repointing your existing data movement; you're lifting what you have in terms of data movement processes and redeploying them on a new technology. This is a real time and cost black hole, or let's call it the money pit, because it's so complex, and these processes are generally not transparent: not easy to understand, and not easy to see how they'd move from one platform to another. So we've established a very strong process that starts with an automated ETL migration complexity assessment. It sounds like a lot, but really it's about understanding what you have today, then categorizing and analyzing it so you can see what's in those jobs and how complex they are, and, through that process, understanding the commonalities and disconnects between the platforms involved. It also becomes a great tool to feed your business case and cost justification, to get the support you need from the business to do this.
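Before going deeper into the complexity assessment, here's a minimal sketch of the direct-load generation just described: simple as-is to to-be column mappings turned into bulk-load statements. All table and column names are hypothetical, and a real smart connector would emit the target platform's native load jobs rather than plain SQL.

```python
# Minimal sketch of generating bulk-load code from simple as-is to
# to-be column mappings. Names are hypothetical; a real connector
# would target the new platform's own loading technology.

# (legacy_table.column, new_table.column) pairs, direct moves only.
column_mappings = [
    ("legacy_dw.cust.id", "cloud_dw.customer.cust_id"),
    ("legacy_dw.cust.name", "cloud_dw.customer.cust_name"),
    ("legacy_dw.ord.id", "cloud_dw.orders.order_id"),
]

def generate_bulk_loads(pairs):
    """Group column mappings by table pair and emit INSERT..SELECT."""
    by_table = {}
    for src, tgt in pairs:
        src_tbl, src_col = src.rsplit(".", 1)
        tgt_tbl, tgt_col = tgt.rsplit(".", 1)
        by_table.setdefault((src_tbl, tgt_tbl), []).append((src_col, tgt_col))
    for (src_tbl, tgt_tbl), cols in by_table.items():
        tgt_cols = ", ".join(t for _, t in cols)
        src_cols = ", ".join(s for s, _ in cols)
        yield (f"INSERT INTO {tgt_tbl} ({tgt_cols})\n"
               f"SELECT {src_cols}\nFROM {src_tbl};")

for stmt in generate_bulk_loads(column_mappings):
    print(stmt, "\n")
```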
From there, it's auto-documenting and reverse engineering the legacy ETL technology, reading the code, bringing it into those mappings, using the smart connectors to point them at the new technology, and then testing at the unit-test level. What you see here is a sample of the artifacts we create through this assessment. First, there's a complexity distribution. Some jobs are very simple: take something from here, maybe concatenate it or cut off a few bytes at the end, and put it over there. Moderate jobs may have some levels of transformation, but are still fairly simple. Then you get into the complex and very complex. Now you have a clear understanding of the data movement processes and jobs you have out there, their distribution, and the task in front of you.

Then we start to look at flow design patterns and the commonalities between jobs, because as we configure these smart connectors to automate your migration, the real value of that automation is the number of jobs you can get done with a repeatable process, not having to do something specific for each one. We do a deep analysis of what you have, the patterns behind it, and how we can transform those jobs in bulk, and it also shows you which ones are just not worth taking the time to automate: very unique jobs, but very few of them. Maybe those are the ones you put your own resources on, and that's a sliding scale that becomes part of the decision-making process around potential cost savings, risks, and the rest. We also look at the frequency of components, in terms of how much reuse we can get through the automation process, and which components you have. If you look at the bottom left there, this example is moving from one environment to Azure Data Factory, and we're seeing there is no out-of-the-box equivalent in ADF for certain transformations, so those will have to be custom-written. That can still be done through automation, but it's more complex, and the analysis lets you understand the commonality between jobs and where the real work is going to be.

Once this is done, it pushes out a complete project plan that lets the customer understand what's going to be done and when, and when they'll need resources to go beyond unit testing into acceptance testing, load testing, and the rest in the new environment. So there's clear visibility into how you're going to get there: the steps, what's required, and the timeline. Through this process we've consistently been able to show people how to cut these projects down by 40 to 50 percent, and we've actually seen up to 70 or 80 percent, depending on the complexity of the environment being transferred. So it's very powerful, not just as enabling technology, but also because it enables the decision-making behind the migration, with clear visibility into what you have and what it will take to get it over there.
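To make that first artifact, the complexity distribution, a bit more concrete, here's a minimal sketch of scoring and bucketing jobs by the transformations they contain. The weights, thresholds, and job names are invented for the example; a real assessment parses the ETL tool's exported XML.

```python
from collections import Counter

# Minimal sketch of an ETL complexity assessment: score each job by
# the transformations it uses, then bucket into a distribution. The
# weights, thresholds, and job names are invented for illustration.

WEIGHTS = {"copy": 0, "concat": 1, "substring": 1,
           "lookup": 3, "aggregate": 4, "custom_script": 8}

def score(transforms):
    """Sum per-transformation weights; unknown transforms cost 5."""
    return sum(WEIGHTS.get(t, 5) for t in transforms)

def bucket(points):
    if points <= 2:
        return "simple"
    if points <= 6:
        return "moderate"
    if points <= 12:
        return "complex"
    return "very complex"

jobs = {
    "load_customers": ["copy", "concat"],
    "build_sales_fact": ["lookup", "aggregate", "custom_script"],
    "stage_orders": ["copy"],
}

distribution = Counter(bucket(score(t)) for t in jobs.values())
for level, count in sorted(distribution.items()):
    print(f"{level}: {count} job(s)")
```

The same scores, aggregated by pattern rather than by job, are what tell you which families of jobs to automate in bulk and which rare one-offs to convert by hand.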
Now you can take things that were written for IBM, Informatica, and the other tools out there, and start taking advantage of these new technologies: cloud-based ETL, Spark-based big data initiatives, and do it in a very consistent way, because it's machine-generated code, driven by patterns and standardized. It's an immense amount of time and cost saved through that process.

I know we're getting closer to the end, so I'll keep moving along. The other backside benefit of this is that the thing that enables the migration process is also the thing that enables governance. It governs the migration itself, but it also puts everything in place on day one so that you can start to govern and promote literacy, fluency, and capability around the new platform immediately, because the same tools and capabilities you used to do the migration have left you with very robust, rich, and rigorous documentation on which to build the governance and intelligence framework that's required.

What you're looking at here is a capability we call the Mind Map. It can be used on any element of data, or any terminology around data you may have in the business glossary: terms, policies, procedures, or any other custom business assets you've associated to bring context to your data. From that, it queries all of the metadata, all of the entries in the business glossary, and all of the entries in that business asset framework, and shows you the connections in an easily navigable diagram that you can drill into to start your journey. All of this is available to you simply because you documented everything through the migration process and then leveraged some technology to put business context on top. Now somebody can come in here, look at customers, look at the business terms and business policies, but also drill down on the left side into the actual physical technical data assets and see things like the restrictions behind them, whether there's PII or sensitive data associated with them. All of that becomes clearly available for people to use and understand as they navigate their journey through the corporate data and become better data citizens.

And leveraging AI to automate the connection and association of this framework really speeds up the process. In this example, we're taking terminology from the business glossary and having an AI/machine learning process go out and sniff through all of the metadata, bring up the candidate associations, and rank those candidates based on what the capability learns through the process. Your job is then to say yes, this should be associated, or no, that shouldn't. It's a very powerful capability for taking the manual analysis out and accelerating your ability to put a framework over this new platform that will benefit your entire business moving forward.
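As a toy stand-in for that candidate ranking, here's a minimal sketch that scores technical columns against a glossary term and leaves the final accept/reject to a steward. A crude lexical ratio stands in for a trained model, and every name here is made up for illustration.

```python
from difflib import SequenceMatcher

# Minimal sketch of glossary-to-asset association: rank candidate
# columns for a business term, leaving accept/reject to a steward.
# A lexical ratio stands in for a trained model; names are made up.

def similarity(term, column):
    """Compare the term against the column's leaf name."""
    leaf = column.split(".")[-1].replace("_", " ")
    return SequenceMatcher(None, term.lower(), leaf.lower()).ratio()

term = "customer email address"
columns = [
    "dw.dim_customer.email_addr",
    "dw.dim_customer.cust_name",
    "stg.orders.order_email",
    "dw.fact_sales.amount",
]

# Highest-scoring candidates first; a steward confirms or rejects each.
for col in sorted(columns, key=lambda c: similarity(term, c), reverse=True):
    print(f"{similarity(term, col):.2f}  {col}")
```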
I talked a little bit about lineage, and lineage is a challenge. We did a survey in late 2019 that came out in early 2020, and lineage still sits at the top, along with reducing data preparation times, as one of the biggest challenges for organizations of all sizes and stripes. The reason is that, first, lineage is complex, and second, lineage is a different thing to different people in the organization and needs to be consumed in different ways by those people. A business person may need a specific view without all the technical noise, while an architect or somebody developing new data movement processes needs a significant amount of detail to be effective. By documenting all of these things in the mappings, the solution can query those mappings and provide each person the level of detail they want, along with the ability to navigate and leverage those lineages for business benefit.

A great sample use case is a data steward looking at something that's been classified as sensitive data, maybe PII, maybe GDPR-relevant, depending on how they've set up their framework for tagging and classifying assets. They can look at the lineage and say: okay, it's tagged as PII in the data warehouse, but does that tag go all the way back to the source system, and does it move forward into the business intelligence systems we've also documented in this environment, so that everyone has full awareness and all of the privacy and security obligations follow the data through its journey? You can actually leverage the reverse or forward lineage to update all of the attributes feeding this element with the same classification. Again, that saves a lot of time, effort, analysis, and pain, and removes the potential to miss things, because you started by documenting everything in the right place to enable your migration, and now you're driving that forward into governance. The same goes for classifications generally: people can see where classified or sensitive data sits across the organization and use this to drill down and work with it effectively.

I think we're coming up on 10 minutes. I've mentioned smart connectors; this is a technology we've built that is focused on four key areas: reverse engineering to auto-document, forward engineering to generate code, integrating ecosystems together in a meaningful way and passing information back, and connecting to other automation environments, around testing and the like, so you can be part of the bigger picture. We have smart connectors for all of these technologies and many more. I don't think there's been a data movement environment we've been asked for that we haven't been able to work with, other than maybe Ab Initio. I haven't looked at the questions to see if Ab Initio is there; if it is, you might be in a little trouble there, just as with everything else around it, but we can definitely still help.
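Coming back to that data steward scenario for a moment, here's a minimal sketch of propagating a PII classification both backward and forward along documented lineage. The edge list is a hypothetical example of what the mapping documents would supply.

```python
# Minimal sketch of propagating a PII classification along documented
# lineage, upstream and downstream from a tagged column. The edges
# are a hypothetical example of what mapping documents would supply.

# Directed edges derived from the mappings: source -> target.
edges = [
    ("crm.customer.email", "stg.customer.email"),
    ("stg.customer.email", "dw.dim_customer.email_addr"),
    ("dw.dim_customer.email_addr", "bi.report.customer_contact"),
]

def propagate(start, edge_list):
    """Tag everything reachable forward and backward from start."""
    tagged, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        neighbours = [tgt for src, tgt in edge_list if src == node]
        neighbours += [src for src, tgt in edge_list if tgt == node]
        for nxt in neighbours:
            if nxt not in tagged:
                tagged.add(nxt)
                frontier.append(nxt)
    return tagged

# Tag the warehouse column as PII and let it follow the lineage.
for asset in sorted(propagate("dw.dim_customer.email_addr", edges)):
    print(f"classified PII: {asset}")
```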
It's these smart connectors that give you that capability, and, just so you understand, a smart connector is not just a single connector for a specific vendor or technology; it's the foundation for customization and configuration specific to your environment and your needs. It's very powerful, and it really lets you get your arms around data movement, data in transit if you will, which has always been such a challenge for all of us. I remember building data warehouses back in the old, old days: the biggest problem was always how do you get things to move from one place to another, what happens to the data on the way, how do you change it with integrity, and then how do you articulate all of that to people at the level they need in order to answer the questions you constantly get asked as time goes on.

So, to bring it all together in a nutshell: it's a set of capabilities you need around these platforms to get the most value out of them and to get to that value, the finish line, or really the start line when you think about it, as fast as possible. You need this combination of modeling, governance, intelligence, transformation, automation, and cataloging, with the capability to manage metadata, leverage it for benefit across the entire scope of the business, and easily transform it into meaningful messages for all of the stakeholders in the organization, so they can come forward in a self-service way and become more fluent and literate in the data, more effective at using it for business benefit, and, at the end of the day, much better data citizens. So, Shannon, I think with that we can move to questions. I'm not sure why I have "discussion" at the end of the slide, but you and I can discuss, Shannon.

We could. I may not have much to add, but, Danny, thank you so much for this great presentation, as always. And just to answer the most commonly asked question: a reminder that I will send a follow-up email to all registrants by end of day Thursday, with links to the slides and the recording of this session. So, Danny, diving in here: how is business continuity being defined when the business is not aware of greater capabilities?

Sorry, I missed the last part of the question; it just broke up on my end.

Sure. How is business continuity being defined when the business is not aware of greater capabilities?

Well, business continuity, to me, is at the very least the ability not to stop, or not to be forced to stop. When I talk about business continuity, it's about making sure that when something like COVID comes along and everybody has to go home, we don't just put the shutters up and stop. But then business continuity goes to the next level: we have a community in our organization relying on our existing data warehouse to do things from an operational perspective, a strategic perspective, a decision-making perspective. If we're going to move to some new capabilities, they don't want to know about it, because they want to be able to keep getting what they need.
And without being told, sorry, you're going to have to shut down for a period of time because we're moving over to this new platform. Where I think it really starts to get exciting for the folks who aren't aware of the new capabilities is when they see the response to the new requirements and requests they submit for the things they need to be successful in their jobs. In a perfect world, part of business continuity is providing a superior capability to your business without really having to tell them much about it, other than: these are the benefits, this is how much it will cost, and this is when we can have it for you. That's truly business continuity. I remember in the old days, trying to make a simple change in the data warehouse, you would tell them the timeframe and the look on their faces would be shocking. The beauty of these new technologies and approaches, things like data vault, which is not new but is being refined and becoming much more popular as a route to agile data aggregation and data warehousing, is really about the business response people get. If you're doing your job right, then apart from the folks who hold the budget, the only thing the people using the data facilities you build should notice is that things are getting better: they're getting what they want faster, and maybe they're using some new tools and technologies that are more intuitive and easier to understand. Nothing that's going to be a burden for them. You really want to deliver this almost sight unseen to the average person out there, so they just feel that life is getting better and easier and they can be much more effective. I hope that answers the question.

Yeah, certainly. So, Danny, a couple of slides back, on your smart connector slide: what about FME for GIS data, for geographic information data?

Our connectors really work at the metadata level, and that type of data lives in a system that probably has some unique qualities that let it do those things. I'm not an expert in that space, so I'll repeat what I've found in moving forward with this. Our lead in this world, who is a genius and a real good guy, John Carter, has not yet met a use case where he hasn't been able to deliver a capability in terms of bringing things together and leveraging them, whether to move them or to automate them through the process. So what I'd say is: this is a route to do that, just as it's been a route to do a lot of other things. I'd advise people to reach out; we're very friendly, and let's have a talk about it. I haven't seen that specific case done yet, but I also haven't seen something come in that we haven't been able to do for the customer, other than maybe Ab Initio on the ETL side.
And if anyone's out there from Ab Initio, I'm really sorry. This approach is very reusable across different things. It started out as simple ETL-tool-to-ETL-tool migration, then it moved into the BI world, and then into the big data world, and every challenge that's come in front of us, we've been able to overcome and deliver what the customer wanted. So I would say reach out and let's talk, because I think you'd be excited to see what the capabilities really could be.

So, Danny, we really just have a minute left, and we could probably spend a whole webinar on this next question, but let's see if you have an elevator pitch: what are the best practices to avoid overspend on migration to cloud platforms, and how do we avoid those pitfalls?

The best practice I've seen from organizations that have been successful is: make sure you start with a manageable scope. Some people want to take everything they have, drop it onto the platform, and say, let's go. I won't say that never works, because depending on the complexity and the size of what they're trying to move, they may be small enough to pull it off. But what we've seen is that most people are looking for a specific new capability on the cloud, whether that's implementing a data lake they didn't have before, or migrating a data lake they put onto Hadoop on-premise and now want to run in the cloud. And what we've found is that as you move things across to these platforms, there's a lot of reuse in the process: a little bit of learning and a lot of reuse. So start with something impactful enough that you can measure the benefits, but not so big that you bog yourself down and get swamped by sheer volume. Then leverage that and create a repeatable process you can build on. Take your data warehouse or your data lake across, and then start to bring over the different applications that make sense over a period of time. Incremental proof of success, and then repeated success, is as close to a best practice as I can deliver.

That's perfect. Well, Danny, again, thank you so much; always another great presentation. Thanks to our attendees for being so engaged in everything we do. That is all the time we have for today. Again, I will send a follow-up email by end of day Thursday with links to the slides and the recording of this session. I hope everyone has a great day, and stay safe out there. Thanks, Danny.

Thanks, Shannon. Thanks, everybody.