Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Officer of DATAVERSITY. We would like to thank you for joining this DATAVERSITY webinar, "Keeping the Pulse of Your Data: Why You Need Data Observability to Improve Data Quality," sponsored by Precisely. Just a couple of points to get us started. Due to the large number of people that attend these sessions, you will be muted during the webinar. For questions, we'll be collecting them via the Q&A, or if you'd like to tweet, we encourage you to share some questions via Twitter using the hashtag #DATAVERSITY. And if you'd like to chat with us or with each other, we certainly encourage you to do so. And just to note, Zoom defaults the chat to send to just the panelists, but you may absolutely change it to network with everyone. And to find the Q&A or the chat panels, you may click those icons found in the bottom middle of your screen to activate those features. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and any additional information requested throughout the webinar. Now let me introduce to you our speakers for today, Julie Skeen and Mike Cisselec. Julie is a Senior Product Marketing Manager with Precisely. She has over 25 years of experience working on solutions for customers in data-intensive industries. She focuses on understanding customer needs and ensuring Precisely's data quality and data observability solutions are aligned with those needs. Mike is a pre-sales consultant for Precisely and has been working in the data management space for over 20 years. He specializes in data quality, data governance, data integration, and big data. And with that, I will give the floor to Mike and Julie to get today's webinar started.

Hello and welcome. Thanks, Shannon. Appreciate it. I want to thank everyone for joining us today and let you know how excited Mike and I are to be with you today and talk about data observability and how it can help to improve your data quality. Next slide. Next slide. Go ahead one more. So the plan for our session today is to share some introductory information about data observability. We're then going to discuss how data observability works, and we will show you some use case examples in action. And we'll of course have time at the end to answer any of your questions. So with that, let's jump in. Here you see a few statistics from Forbes, the Harvard Business Review, and Precisely's own data trends survey. Looking at these, you can see that when two-thirds of organizations say siloed data negatively impacts their data initiatives, and almost half of newly created data records have at least one critical error, it is no wonder that 84% of CEOs doubt the integrity of the data on which they make decisions. So let's learn how data observability can help. There are a number of business challenges that can be improved by a data observability solution. So let's see if any of these sound familiar to you. Maybe something goes wrong with the data pipeline that impacts downstream operations or analytics. You might experience this as an email from IT saying that your BI tool is unavailable. Do you ever experience a lack of confidence in decision-making based on the data that's in your BI tool or advanced analytics processes? Does your team ever find that writing scripts or other manual methods that were used in the past to look for operational data issues no longer scale as your data volumes increase?
If any of these challenges resonate, then your organization can benefit from a data observability solution. So what is data observability? Observability itself is not a new concept. You can see it started over 100 years ago, and it really is a key concept in many process methodologies, and it's used in industries such as manufacturing as well as, of course, software development. What is newer is applying these concepts to data, and that's what we call data observability. So data observability ensures the reliability of your processes and analytics by alerting you to potential data integrity events. It answers the question: is my data ready to be used? So when we say use, what do we mean by that? Well, we mean anywhere that your business depends on the data being accurate, and obviously this can mean a lot of different things to different people. So as an example, if you are dependent on a BI report, you may ask, is the data that's feeding my reports correct? Or if you're a data engineer or in IT, moving data through an ETL or ELT pipeline, you might want to know if the data is being transferred correctly. Or maybe you're a data scientist or in DataOps building advanced data science models, and you want to know if the models are reflective of recent data changes, or maybe they need to be retrained. So take an example of a simple process. Say the finance department makes business decisions based on a daily report of the online orders that have come in. If for some reason the data that is coming in and feeding this report from the source systems is incorrect, now all of a sudden the insights and outcomes based on the report will be flawed. And that obviously has negative downstream impacts. So that's what we mean when we talk about data observability. So then the next question is, why is this important now? What has changed? What makes the difference? So we all know that businesses are using data for more purposes and ultimately becoming more dependent upon it. I liken it to the example of driving a car while looking at your phone. Well, the primary goal of driving your car should be to get from point A to point B safely. But if you're constantly inundated with distractions that might be chirping, buzzing, and vibrating from your phone, and now you're trying to look at your phone while you're driving, well, this can obviously cause you to veer off the road or collide with another vehicle. The same applies to data in your business. So obviously your objective in your business is to run your business. But if you're constantly distracted worrying about data issues that might happen, then you could potentially drive off the road. Similar to the gauges in your car, you only want relevant alerts to help you drive your business successfully. These issues are more relevant today for a variety of reasons, but we think they can be grouped into two main categories, and those are data proliferation and technology diversification. So what do those buzzwords mean? Well, by data proliferation, I mean there's more data. There's a lot more data. A Forbes study estimates we're creating 2.5 exabytes of data every day. And feel free to Google exabyte if you need to, because I did. And this is spanning a variety of locations, such as cloud, on-prem, or hybrid cloud, not to mention the movement of data across these locations.
By technology diversification, I mean pivotal business transformation initiatives are empowered by amazing next-generation tech that represents a whole rethinking of established legacy systems. These efforts almost always span a diversity of vendors, applications, and technologies such as streaming, IoT, AI, and ML. Data consumers, users, and producers cannot take their hands off the wheel to validate that the data is ready for use, as they need to stay focused on steering the business. Data observability enables business value not only by providing fast insights to allow quick decisions, but also by making sure the data being used for those insights is trusted. So it's one thing to identify data issues, but more importantly, data issues need to be corrected before the data is used in making decisions. Data issues will happen. There's no system or process that's perfect, but proactively addressing the issues prevents them from impacting the business. As you can see in this picture, data observability shines a light on your potential data issues with only passive user interaction. This captures the essence of data observability and just how simple it is to shine a light on a potential problem and change course versus later having to salvage the wreckage. If there's one takeaway from this overview, please remember data observability is proactive and intended to improve data reliability and reduce data downtime. Using a variety of techniques, data observability surfaces issues in source systems before they become significant. We're going to show you a few examples of those techniques today, based on volume and data drift detection methods that answer questions such as: is my data ready to use? Do I have all my data, and do I have the right data? And with data proliferation and technology diversification at play, reactive methods simply don't scale. They're error-prone and they're resource-intensive. And as the adage goes, an ounce of prevention is of course worth a pound of cure. So the process of managing the data life cycle and data journey, and monitoring it across an enterprise, has become incredibly sophisticated and complex. It's not out of the ordinary to see thousands of pipelines and transformations spanning hundreds of data sources. Data quality is often validated at the final delivery stage. Comparing this to a traditional manufacturing process, it's the equivalent of ensuring the quality of the finished product with a post-manufacturing inspection. As you can imagine, this process is incredibly costly from a time, risk, and cost of goods and materials perspective. The same concept applies to your typical data products, things like analytics, reports, applications, pipelines, any process driving an outcome, and the results are the same. For a data pipeline, it means a stakeholder is finding the issue and reporting it to the creator of the analytics. Again, it means having to go back to an earlier stage after production is thought to have been complete. Next slide. So contrast that process with what it looks like when you add in data observability. Data observability enables the user to visualize the data process and see deviations from the typical patterns. What that means in this example is you see a typical data process that spans multiple data sources and transformations. What you're seeing here is a simple view, of course, but in reality there could be hundreds of different transformations spanning many different data sources.
And as the data moves through the pipeline, it's observed at each stage, ensuring the entire process is stable. So in this example, if you take a look at data source three, as you can see, it's applying enrichment as well as blending and merging of data. You can see there's some sort of anomaly that has the potential to jeopardize the final data product. Catching it at an early stage allows the appropriate resource to identify the issue, assess it, and resolve it before the data is made available for consumption. Many studies have been published validating the cost savings of finding issues earlier in the life cycle. This cost savings can be significant. So the early resolution eliminates wasted time and resources in later stages of the pipeline, not to mention the risk of negative business outcomes. It's critical that the data issues are discovered and remediated before decisions are made based on inaccurate analytics. Okay. And with that, I'm going to hand it over to my colleague, Mike, who's going to show this in action. Mike.

Great. Thanks, Julie. I appreciate it. Yeah. So what I'm going to do is talk about a couple of different use case examples where data observability applies and show you how this all works. We have three main sections this is broken down into. I'm going to kind of jump over to the demo, and we're going to come back to these slides a little bit in a few moments. So let's go to the demo now. What you're seeing here is a view of the Data Integrity Suite. This is an instance that's live and interactive. I'm on what we call the observer section. An observer here, sometimes people think of that as a person, but really this is more about an observation that you set up for machine learning and AI to look at the data, notice different anomalies in the data, and identify those anomalies. So we have them let us know and be proactive, as Julie mentioned earlier in the webinar. What you can see here are different observers that have been set up. There are different sets of alerts, both red and yellow. To create these observers, it's pretty straightforward: if I wanted to create one, I can just hit the create button. In addition to that, I can easily edit them as well. So if I pick one of these to edit, it'll bring up the information that's available. I can also go at that point and look at a couple of the main facets of this; right now, we're looking at volume and data drift. If I want to step into one of these, I can configure it, and for volume, you have a couple of different options. You can set it up for confidence-based alerts. So you can set the threshold; you can decide where this number is, in a sense. What constitutes an alert? What percentage or what comfort level do you have for these? And as a matter of fact, you can also type them in and change them here as well. So there are confidence-based alerts, and then there are threshold-based alerts, where I can choose that and look at it from a threshold perspective. How much of a change is there? What kind of volume change in the data is an issue where I want to raise a warning or have a critical alert? And non-alerts can be displayed as well. Now, we can look at the same kind of scenario for data drift. And this looks at, as you can see, some of the different types of statistics, like minimum, maximum, mean, and standard deviation, and for text, min and max length and distribution count.
And again, the thresholds for this can be set and decided upon using the slider or by making the change directly. So this gives you an idea of the observers you can set up. And then at that point, you can take those observers and have alerts created. Once they're put in place, they will give you a set of alerts over time. Of course, machine learning will happen. If I click on any of these, you'll see some changes for them, and I can page through these as well. So let's look at it from a use case perspective. I have one of these types of alerts here for currency, and I can see that there have been some changes for the various different types of currency. It looks like the distribution value count has changed dramatically recently. So in the past couple of days, there's been a huge uptick in these different types of currencies. I can expand this chart and take a look at it in detail. Perhaps I don't want to look at Hong Kong dollars. So let's say we want to just look at Australian dollars, and we can see the chart for that particular currency. I can also look at this from the perspective of... let me pull that over.

Mike, why don't I keep going, because we had some of those other examples later in the slides. Sure. Yeah. It just got fixed, so sorry about that. Go ahead, you can keep going if you like. Okay, and then maybe when we circle back, you can pick the demo back up. Yeah.

So on this slide, how does data observability work? It's really broken down into three main sets of capabilities, some of which Mike was starting to show you. So to help you understand the demo that you're seeing, the first part is the discovery. In discovering the data, you want to observe and collect information about the assets through a variety of techniques and tools. The second portion is the analysis, and that's where the system identifies any adverse data integrity events. And that analysis can get quite sophisticated. It often implements modern AI and ML methods to process massive amounts of metadata and related information. And then finally is the action step. That's bringing those alerts and insights to the forefront for both manual and automated resolution, and it is essentially the step where you do something about the data issues that may have been found by the system. So let's look a little deeper at a key capability of data observability analysis: anomaly detection. It might not be obvious when we talk about the analysis components of data observability, but the underpinning of this capability set is extensive intelligence powering those insights. Outlier detection for identifying anomalies has been proven to be an effective technique in many use cases, and it's an integral part of data observability. Here you can see a few typical patterns of anomaly detection used in data observability. This is just scratching the surface of the AI and ML methods used to determine outliers. If you're familiar with these types of methods, you will see this includes a variety of things such as random noise, step changes, both upward and downward trends, and others. Not to mention there are many more complex scenarios that include trends based on seasonality and nuances between data types, such as numerical data, character data, and dates. But if you aren't familiar with any of those specifics, that's fine.
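[To make the anomaly detection idea concrete, here is a minimal sketch in Python of one of the simpler patterns mentioned above: a rolling z-score check over a daily metric that flags outliers and step changes. The window size, threshold, and sample data are illustrative assumptions, not Precisely's actual models or settings.]

    import statistics

    def rolling_zscore_alerts(series, window=14, threshold=3.0):
        """Flag points that deviate sharply from the recent history of a metric.

        series: list of daily values (e.g., row counts or a mean exchange rate).
        window: how many prior days form the baseline (illustrative default).
        threshold: how many standard deviations away counts as an anomaly.
        """
        alerts = []
        for i in range(window, len(series)):
            baseline = series[i - window:i]
            mean = statistics.mean(baseline)
            stdev = statistics.pstdev(baseline) or 1e-9  # avoid division by zero on flat data
            z = (series[i] - mean) / stdev
            if abs(z) > threshold:
                alerts.append((i, series[i], round(z, 1)))  # (day index, value, z-score)
        return alerts

    # A synthetic daily row count with a steady upward trend and a sudden step change on day 25.
    daily_rows = [1000 + d * 5 for d in range(25)] + [4000 + d * 5 for d in range(25, 30)]
    print(rolling_zscore_alerts(daily_rows))  # the step change is flagged; the gradual trend is not

A real system layers on the seasonality and data-type nuances Julie mentioned, but the shape is the same: learn a baseline from history, then alert when a new observation falls outside it.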
The main takeaway here is that there's extensive artificial intelligence and machine learning supporting the anomaly detection. So while you can build specific rules that you want to see in data observability, you don't have to as much, because the system will learn what to expect from your data and will alert you when anything appears outside of the norm. Next slide. So we often get asked, how does data observability relate to and differ from data quality? The simple answer is there is some overlap, and in your organization, they may be under the same umbrella. Both focus on metadata and the traditional data quality dimensions: accuracy, completeness, conformity. The biggest difference is how we go about it. Data observability emphasizes the identification of anomalies in data based on patterns over time. This is similar to human inference, or how you or I would look at a data trend line and draw a conclusion, versus the static predefined rules that are inherent in the majority of data quality tools. So here at Precisely, we offer both data observability and data quality as distinct capabilities, but we also make sure that both sets of functionality complement each other to ensure customers get the most possible value from the solutions and, ultimately, the highest trust in their data. Next slide. So the final capability set that we were talking about is the action step. This is where you visually see the alerts that have occurred on your data pipelines and what's impacted. And those alerts can proactively be pushed out via notifications. Here you can see an example of a volume alert and the related assets that are impacted by this alert. Okay, now I'm going to go back to Mike with some more use case examples that he's going to walk through and then show you in the solution.

Thanks. So talking about the use cases, as we mentioned before, you have different kinds of impacts. The first is the impact of unexpected values, when the data doesn't perform as you expected it to. What this does is cause situations where there might be errors. In this case, we looked at this a little bit a moment ago; I'm going to go back to the demo to talk about an error in currency conversion, exchange rates, et cetera, where the machine learning is picking up on this information and letting you know that the data isn't behaving the way you expected. This is also happening at a volume level as well. So you have unexpected data volumes, where the volume is changing to a degree you're not expecting, but you may not realize that it's changed, meaning no human is actually watching this data that closely. But you want to have an alert when things are perhaps trending in a direction, so you can be proactive. So let me tab back over to the demo again and bring that back up. We just began talking a little bit about the alerts themselves; I mentioned setting up an observer briefly. An observer, again, is an ML way of watching this data for you. So we have a lot of ways we can look at this. You can see here I have currency and different types of views into this data. Since I just mentioned currency, we'll talk about that. I touched on it a little bit when I went in; you can see the range of change for it. Let's look at the exchange rate for some of those currencies.
We can see that there are different data drift values that have been put in place for these, where the min and max values have declined or increased at a rate that's unexpected over time. If we were to look at, perhaps, revenue, I can just type and look at revenue, as I've just started searching here. I can pick any one of these items, and it'll let you know when there's been a decrease in revenue, perhaps, where you were increasing through the holidays and now the revenue is declining. So you want to be alerted when there's a change in any particular number. Now, this is the cardinality and this is the revenue amount, so there are different statistics that will bring these alerts. The mean, of course, will change as well given that scenario. That's looking at things from a data drift perspective. You can look at things from a volume perspective too. I could have changed the alert type, but I can pick a volume alert and see we're looking at a table here. In this case, we can see that the row count was steadily increasing and now suddenly it's flat. Not coincidentally, this was over the Thanksgiving holiday weekend, so perhaps it's normal for those rows to not be increasing; maybe the office or the business is closed for the holidays. But in this case, there was a steady increase in daily orders, and then suddenly the data went flat. So you want to be alerted right away when there's any kind of change in that data, and that information is sent to you. Now, the details for this, I know, may be a little small on your screen, but you can see what type of alert it is, the configuration, the table name if there is one, the schema, wherever this data came from. In this case, it's in a Snowflake instance, where this observer is running to watch this data for you. So it's doing this work. Now, what is this based upon? What is giving those alerts to support all these use cases that we're talking about? Well, that comes from profiling the data: taking a look at the data, understanding what kind of data we have, and understanding what it looks like, the view of the data. So we were talking about currency before. We can see information about currency, things like nulls, duplicates, unique values. We can see frequency analysis, any kind of patterns, the shape of the data, whether there are special characters involved, et cetera. All of this is available. We can even dive into any of these little charts to see the information in a graph or a tabular format. Now, this one isn't all that exciting, but if I go to, perhaps, exchange rate, I get some more interesting information. So here I have things like min, max, standard deviation, variance, and average over time, and again, any of these can be opened up and looked at in a little more detail. When you're dealing with something that's numeric, you're going to have different types of histograms available, percentiles, and of course numeric information. So this is giving you the profiling information, and this is the support for the observability. Now, I'm looking at the latest run of this, but I can actually step back and look at previous runs. So let's say we run this daily or weekly; we look at this data and try to determine any kind of changes. I can step backwards and look through. This was on December 4th; if I step back to December 3rd, I can see any changes over time. I can take a look at this, so you have at your fingertips the view over time for these pieces of data.
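[As a rough illustration of the profiling and drift comparison shown in the demo, here is a small sketch using pandas. The column names, snapshots, and the 25% tolerance are made-up examples; the suite's actual statistics and thresholds may differ.]

    import pandas as pd

    def profile(df, column):
        """Collect simple profile statistics for one column of a snapshot."""
        s = df[column]
        return {
            "nulls": int(s.isna().sum()),
            "distinct": int(s.nunique()),
            "min": float(s.min()),
            "max": float(s.max()),
            "mean": float(s.mean()),
            "std": float(s.std()),
        }

    def drift_report(previous, current, column, tolerance=0.25):
        """Compare two runs and flag any statistic that moved more than the tolerance (25%)."""
        before, after = profile(previous, column), profile(current, column)
        return {stat: (before[stat], after[stat])
                for stat in before
                if before[stat] and abs(after[stat] - before[stat]) / abs(before[stat]) > tolerance}

    # Two synthetic daily runs of an orders table: today's exchange rates have drifted sharply.
    run_dec_03 = pd.DataFrame({"exchange_rate": [1.02, 1.05, 1.01, 0.99, 1.04]})
    run_dec_04 = pd.DataFrame({"exchange_rate": [1.03, 2.40, 2.55, 1.00, 2.47]})
    print(drift_report(run_dec_03, run_dec_04, "exchange_rate"))  # max, mean, and std are flagged

In practice the observer keeps a profile for every run, which is what makes it possible to page back through December 4th, December 3rd, and so on, as in the demo.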
So this gives you an idea of what you can do with this. This is the orders table; as I mentioned, we're focused on orders here, the currency, et cetera. So I'm going to bring you back to the PowerPoint for a moment, and what I'm going to do is give you kind of a recap. The idea behind this is you have data anomalies, and they can impact downstream processes. You want to be alerted as soon as possible to these data anomalies. If you have unexpected values, some kind of invalid type, perhaps an invalid currency type or something like that, it will let you know that this information is getting into your system before it propagates for days. You'll get these alerts right away. If there are unexpected data values because of some kind of lack of communication, perhaps different systems aren't communicating with each other, it'll give you that information and you'll be able to know this ahead of time. That's what observability gives you. It has something watching the data for you. It acts as a way of understanding what's going on with the data and letting you know as it's trending, before you discover it at the end of the month on a report. You'll be able to know perhaps daily or hourly, or however you want to view this; you'll be able to know that information. [A small sketch of this kind of volume check appears just before the Q&A below.] Julie, I think I pass back to you here for this set of slides.

Thanks, Mike. Now that you've heard about what data observability is and seen how it can apply to these specific use cases, I want to review the benefits of data observability. Hopefully some of these are apparent to you from what you've seen. First is understanding data health. As the system continuously measures and monitors what's happening, you can utilize dashboards to understand the health across your data landscape. The visibility also extends to the built-in discovery capabilities that Mike was showing you that allow you to explore the data. And alerts are provided when the intelligence determines there's an outlier, and that is shown both visually and pushed out to users. It enables you to take action to avoid impacts that may occur based on undetected data drift and shift. And finally, it allows you to quickly remediate issues, and integrated data quality solutions allow you to further expedite this process. Next slide. So before we close out, I want to mention that the product you saw today is Precisely's data observability solution. This is a module of the Precisely Data Integrity Suite. Next slide. The Precisely Data Integrity Suite is modular, interoperable, and contains everything you need to deliver accurate, consistent, and contextual data to your business. It is a set of seven interoperable modules that enable your business to build trust in your data. Next slide. The suite has been built so you can start wherever you are in your data integrity journey. This means the modules are designed to be implemented either together or standalone, with best-in-class capabilities. For example, you can start with data observability and layer in other modules over time. So with that, I will, oh, there you go. That's right. Now you can see a brief view of the modules within the suite. So with that, I'll turn it over to Shannon for any questions that might have come in.

Mike and Julie, thank you so much for this great presentation.
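[Before the Q&A, here is a minimal sketch of the kind of volume check an observer automates, tying back to the unexpected data volumes example. The warning and critical percentages are illustrative stand-ins for the threshold-based settings shown in the demo, not product defaults.]

    def volume_alert(row_counts, warn_pct=20, critical_pct=50):
        """Compare the latest load to the recent daily average and classify the change.

        row_counts: daily row counts, oldest first; the last entry is today's load.
        """
        history, today = row_counts[:-1], row_counts[-1]
        expected = sum(history) / len(history)
        change_pct = abs(today - expected) / expected * 100
        if change_pct >= critical_pct:
            return f"CRITICAL: volume changed {change_pct:.0f}% vs. an expected {expected:.0f} rows"
        if change_pct >= warn_pct:
            return f"WARNING: volume changed {change_pct:.0f}% vs. an expected {expected:.0f} rows"
        return "OK"

    # Daily orders were climbing steadily, then went flat over a holiday weekend.
    print(volume_alert([9500, 9800, 10100, 10400, 0]))     # CRITICAL: a 100% drop
    print(volume_alert([9500, 9800, 10100, 10400, 7600]))  # WARNING: roughly 24% below expected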
Just to let you know, and to answer the most commonly asked questions, I will be sending a follow-up email by end of day Thursday for this webinar with links to the slides, links to the recording, and anything else asked for throughout. And if you have questions for Mike and Julie, feel free to submit them in the Q&A portion of your screen. So diving in here: can the data drift observer be set at a data element level? Mike, you want to take that one?

Sure. So yes, the data drift can be shown at a data element level. When you're looking at it, I was looking at different fields, so if that's what you mean by data elements, we definitely can do that. And it does show it from different facets, like min, max, mode, mean, something like that, assuming that's what you mean by data element. Certainly, Sunia, if you have any additional things you want to add to that question, feel free.

But moving on here: are observability and lineage different tools?

So, Mike, I don't know if you want to bring up the lineage at all, but data observability is integrated within the Data Integrity Suite. We have data lineage as well, as part of a foundational capability within the Data Integrity Suite's data catalog. So that is provided along with data observability, even though the data catalog is broader; it applies to a number of different things, like cataloging your data integrations and integrating with your data governance information. It also allows you to see the context of any sort of alerts that might have occurred.

Is the AI only involved as an observer, or can it also be a decision maker after an anomaly has been detected?

So let me make sure that's clear. Could you just repeat that? I want to make sure I got it clearly. Sure: is the AI only involved as an observer, or can it also be a decision maker after an anomaly has been detected? So that depends on how much control you want to allow. There are going to be capabilities to do that. Again, you probably want to have your AI just suggest information for you. You don't want it actually making changes to your data, maybe until you're more comfortable with what it's producing, if I'm understanding the question correctly. I believe so. But certainly, Amir, if you have any additional things you want to add to that question, let us know. I just want to keep going here, though.

So where do people normally land this data after it's profiled?

So when you say land the data, that would be their decision, and it depends on whether you're talking about the metadata or the data being profiled, so I'll answer in both directions just to make sure. The metadata itself obviously lands within the module that you've just seen; the metadata will be held in a repository. So again, no actual data unless you want it to be. But you may have some actual data in profiling, because you have things like min and max, and those kinds of things tend to have actual data numbers or data values in them. But then again, it's up to you whether we hold them. The data itself is not being drawn out to run this, so I just want to make sure that's clear. For example, if we're looking at Snowflake, the observers are actually running on Snowflake and looking at the data there. The data doesn't have to be moved out of Snowflake to be looked at; it's being run, again, where the data lives. Yeah.
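[To illustrate the point about running observers where the data lives, here is a minimal sketch of pushing a profiling query down to the database so that only a small summary row, not the raw data, comes back. sqlite3 stands in for Snowflake purely so the example is self-contained, and the table and column names are invented.]

    import sqlite3

    # Stand-in warehouse; in practice this would be a connection to Snowflake or similar.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, currency TEXT)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        [(1, 25.0, "USD"), (2, 110.5, "AUD"), (3, None, "USD"), (4, 42.0, "HKD")],
    )

    # The profiling work expressed as a single aggregate query, computed inside the database.
    row = conn.execute(
        """
        SELECT COUNT(*)                                 AS row_count,
               SUM(CASE WHEN amount IS NULL THEN 1 END) AS null_amounts,
               COUNT(DISTINCT currency)                 AS distinct_currencies,
               MIN(amount), MAX(amount), AVG(amount)
        FROM orders
        """
    ).fetchone()
    print(row)  # only these summary statistics leave the database: (4, 1, 3, 25.0, 110.5, 59.166...)

The design point is that the heavy lifting stays in the warehouse; only the summary statistics travel to the observability layer.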
And the question there, do you find, and I believe you answered a lot of this, I'm asking because sometimes it can be a big headache to consume the data and then roll it back if you find something that data observability indicates is wrong.

Yeah. So again, the data is not really moving. So if you find something wrong, the data will be where it always is. In other words, let's say it's a schema or table in Snowflake; the data is still going to be there. So if you find something wrong, it's not so much a rollback, perhaps. I mean, it's going to depend on different scenarios, but the data will be there and you can make decisions accordingly from there.

And did you mention if there was a way to set alerts?

So there is. I mean, the observers themselves, in essence, are what create the alerts. So the alerts are based on the observers that you set in place, and you decide what level of alert, or what constitutes an alert, at that point.

How does data observability work with other applications?

So data observability from Precisely is integrated within the Data Integrity Suite. It integrates with the data catalog, as we mentioned, to allow you to understand the technical context, the technical metadata and lineage; with the data governance module, to understand the business context, so that you can really understand what's going on with that anomaly and make sure that you understand it from a business perspective; and then with our data quality capabilities, to allow you to make sure that you can remediate any of those issues that have been found in an easy manner.

Very nice. And if data from Snowflake is changed in a downstream system or application, can this tool detect and report on it?

So again, you know, I'm just going to bring up something for a moment. This is the governance module. I brought up a lineage diagram, being able to look at a field and understand where it possibly is used downstream, meaning here it's something used in Power BI. This is the same data in a Snowflake instance. So if you want to, you can, in a sense, observe the data in all these different places if you'd like. In other words, if you think there are going to be challenges at different stops, the observer is not limited to Snowflake; maybe you want to observe it where it lives in some other repository too. Hopefully that answers the question and gives you an idea of how it can be viewed and tracked as well.

Indeed. And what type of user would you expect to work data observability alerts?

Sure. So there are different kinds of roles; it depends. Obviously, somebody's going to be concerned with the data. It depends on a couple of factors. One is how mature a data quality, or perhaps data governance, type of program is in place. If your core focus is data quality, it would usually be data stewards. It could also be data scientists and analysts, roles that would want to keep an eye on this data. They want to know that they can trust what they have. And of course, the trust will transfer over to other things like data governance, where you want to understand what you have for those particular applications, so you can trust the data in them, whether it be in the reporting tool, data governance, data science, whatever the case may be.

So this visualization shows how data is used downstream. Is there any visualization including upstream data sources?

Sure. Well, it would kind of be the reverse of that.
If I looked at the data wherever it lived... I'll put the screen back up just for a moment. So if I were looking at the data, and this is a poor example, if I were looking at it here, in this case, these would be the upstream places where we know this data is being used today. And of course, there could be observability set up for those various facets of where this data is used. That's exactly the idea.

Yeah. And if there were more downstream, you would go to the plus, right? Yeah, and I could hit more, of course, opening up more if there are more places where I wanted to see where this is being used. Yes.

And does the tool integrate with a glossary tool like Collibra, or is there a list somewhere of the tools you integrate with?

So I don't have a, we don't have a published list. I will say this: the data that you see is available via REST API, or also, of course, via export as well. So making a connection to other tools shouldn't be a huge challenge, but an out-of-the-box integration with another vendor like that would be on a case-by-case basis.

Perfect. That is the end of the questions that we have currently, so we'll just give everyone a brief moment to type anything else, any other questions that we have. So let me ask you both, what is the aha moment your customers have when they install this?

Yeah, well, I think, I can even speak just from Precisely using it ourselves, right? You see things that pop up that maybe you hadn't thought to write a script for before, right? A lot of times, the way people script issues, to say, oh, check for this, check for this, is that you have to have had it happen before, right? And what customers see as the benefit is that the thing they hadn't even thought to write a rule for or create a script for can be identified with data observability, because the machine is looking at it and saying, oh, what is the typical thing that I would see in this scenario, and I'll alert you when something seems off. And I would add to that, the scope is what I see a lot with customers: the fact that they might have thousands of schemas in Snowflake, as an example, or Databricks, and they have tens of thousands of tables, perhaps. Being able to manage and monitor that and understand it is the challenge, and observability gives them the ability to allow machine learning to kind of do that for them, because it's not possible for a human to have a finger on the pulse of what's going on with all that data.

We got another question that came in here. So how much does it cost? And can this tool be available on any of the cloud providers, like AWS, Azure, et cetera?

So I'll answer part B of that question, so to speak. It can be available on AWS and Azure. I have not yet used it with GCP, but definitely AWS and Azure for sure. The cost part we'd have to take on a case-by-case basis as well, so it's hard to answer that one here. Sure, that makes sense. Well, that's a great question, definitely showing a sign of interest there.

So, Mike and Julie, that is all the questions that we have for the day. Thank you so much for this great presentation. Thanks to our attendees for being so engaged in everything that we do. I saw that some of you are in DC; if you're at the Data Governance and Information Quality Conference here in DC, be sure to stop by the DATAVERSITY booth and say hi to me, and also be sure to stop by the Precisely booth to learn more about Precisely.
Mike and Julie, thank you so much. Really appreciate it. Thanks for this webinar. All right. Thank you.