Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Officer of Data Diversity. We'd like to thank you for joining the Data Diversity webinar, "Including All Your Mission-Critical Data in Modern Apps and Analytics," sponsored today by Precisely. Just a couple of points to get us started: due to the large number of people attending these sessions, you will be muted during the webinar. We will be collecting questions through the Q&A, or if you'd like to tweet, we encourage you to share highlights or questions via Twitter using the hashtag Data Diversity. If you'd like to chat with us or with each other, we certainly encourage you to do so; just note that Zoom defaults the chat to send only to the panelists, but you may absolutely change that to network with everyone. You can find the Q&A and chat panels by clicking the icons in the bottom middle of your screen. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and any additional information requested throughout the webinar.

Now, let me introduce our speaker for today, Ashwin Ramachandran. Ashwin is the Senior Director of Product Management for the Precisely Connect product family. In his seven-plus years at Precisely, he has enjoyed having a front-row seat watching enterprises adopt and react to the progressive waves of technological change taking hold, from legacy data warehousing to Hadoop to modern cloud data warehousing, machine learning, and AI. He is particularly passionate about identifying opportunities for how Precisely can help customers overcome their pressing business challenges as these waves of technology continue to take hold. And with that, I will turn the floor over to Ashwin to get today's webinar started.

Hello and welcome. Thank you so much, Shannon, I really appreciate the opportunity to present today on this topic. With that said, let's jump right into it. As Shannon mentioned, the subject of today's talk is ensuring that the data for your projects, whether that's analytics or building modern applications, includes all of an enterprise's mission-critical data. The agenda is pretty straightforward. I'm going to start by talking about modernization, the benefits, and then the associated challenges with reaching those goals. I'll then dive into complexity, specifically related to what we at Precisely see as mission-critical data assets. From there, I'll pivot towards some solutions and how Precisely assists our 12,000-plus customers in navigating that complexity. And before we jump into Q&A, I'll close with a really interesting customer story that I particularly enjoy going through. So with that, let's start on the main topic: what are some of the benefits of modern analytics platforms? I'm not going to be the first to say any of these items, but my goal here is to summarize the key things we see our customers looking to get out of modern data platforms, modern analytics platforms, and so on.
When you look at modern analytics or data platforms built on top of hyperscalers like AWS, Azure, and Google Cloud, they clearly offer the ability to get visibility into all of an organization's data, so you can get a consistent view of your data that's accurate and solved for a complex, multi-platform data environment. With that in place, you now have the opportunity to enable competitive differentiation: through the variety of services available on these platforms, you can build, say, a more real-time application on data that comes from across the business, not just an individual silo. Another key benefit we see customers leverage is the opportunity to optimize their existing spend. In nine out of ten cases, there are cost savings to be realized by moving away from a static warehouse on premises in your data center, allocated for the maximum capacity you hit one day out of the month, and instead paying for exactly what you use, at a much more cost-effective price per bit; you can handle petabyte-scale workloads at much better cost economics. Modern platforms also allow organizations to leverage market-available skills. The IT world has a ton of data hubs that are either homegrown or built by specific consultants, and organizations may not actually have all the skills they need to manage those environments, modernize them, and push them forward. With a modern analytics platform built in the cloud, you can not only support the existing workloads you've already had with in-house, market-available skills, you can also drive new initiatives around data governance from a unified set of tools. And lastly, because you are consolidating silos, you have an opportunity to achieve insight at any scale. You're no longer limited to just what you can manage within your own environment, and there's a whole set of tools to allow you to do that.

All right, we've talked about modern analytics platforms, and I don't think anything I've said up to now is something you haven't heard before. But let's talk a little bit about mission-critical systems, because this is really the core of the talk. When we look at mission-critical systems, how would you define them? The way we look at it at Precisely is as follows. Mission-critical systems are systems that are deeply entrenched in the history of the business. They may have been around for 10 years, 30 years, or 50 years; given our history, we've got a big footprint within mainframe shops, and those systems have been around 50-plus years. They continue to run core business operations that directly impact revenue, so modernizing one of these systems, or even just working with the data in those systems, needs to be done in such a way that you don't negatively impact your ongoing operations. Typically these are high-visibility systems with tight security restrictions in place, so there's a very strong governance model around who is allowed to touch data in those systems and in what way.
But most importantly, if you add all these pieces up, a key thing to keep in mind is that these systems contain really rich data assets, and those assets are constantly changing. They are not static; they continue to be updated, transactions continue to run through them, and so it's really important to be able to make heads or tails of the data within them. We at Precisely have often used the term legacy systems, and I think legacy systems slot into what we're defining as mission critical here. Really, when you say legacy, the short answer is that it's legacy because it works. It's got demonstrable value to the business, and it's going to continue to be a cornerstone of the IT landscape for that organization.

So if these are mission-critical systems, let's dive into one specific set: the mainframe. The mainframe is not a shrinking market; in fact, it is a growing market, growing at about two and a half percent. Most people we talk to generally think either the mainframe is gone or it's going to go away on some time-bound horizon, but in the interim the mainframe market is continuing to grow. If you look at our customer base, at a very high level we've got two buckets of customers who have a mainframe. Bucket number one is: I've got my mainframe, we've got a plan to retire it, and to do that I need to first start migrating data off of it; that may be a five-year project, a ten-year project, whatever it looks like. Bucket number two is: I've got a mainframe and that thing is not going away, or at least there's no time-bound horizon upon which it's going to go away, so instead what I really need is the ability to leverage the data generated by that platform more intelligently. That's what we see in the market with our customers. In fact, a BMC report back in 2019 said mainframe transaction volumes grew 50% year over year. So this is not just a growing market; the fact that transaction volumes are going up is indicative of more data getting generated on these platforms than in the past, and it's imperative that organizations are set up to deal with the data from those platforms and work with it.

The last thing I want to say as I make the case for this problem is: how can this data contribute to an organization's success? I see this shaking out in terms of some key opportunities and the impacts that follow from them. Opportunity number one, for that second group of customers I spoke about: mission-critical systems are generating data, and those systems are not necessarily going away, so the impact to your business is that the need to reliably integrate this data at speed continues to grow. The second opportunity is that these platforms often hold a lot of historical data. We see a lot of mainframe customers dealing with historical data that might be siloed away on virtual tape; it's sitting on virtual tape libraries, and it takes weeks to get access to any one data element.
And oftentimes you can't build net-new experiences or applications on top of that data just because it is so locked away. By unlocking this historical data, you can start to make more decisions within a historical context. At Precisely, we're really focused on what we call data integrity, which we define as data that is accurate, consistent, and in context. That context could be location context or historical context, and this is a key opportunity to help you make more informed decisions based on trusted data if you've got that historical context. And lastly, there's an opportunity to get more comprehensive views of your various business entities. With some of our offerings in the data quality and data governance space, we're often working on customer 360 use cases, single-view-of-X type use cases, and bringing in this mission-critical data can be very useful to successfully delivering on those projects.

All right, the opportunity makes sense, but what are the key challenges? We've done a couple of surveys in the market at Precisely, and these are the key challenges that respondents highlighted. Number one: getting real-time feeds of data. 451 Research has put out numbers indicating that 37% of organizations have over 100 data silos. Imagine trying to pull data and close those silos one by one doing full loads of data. Most of our customers are looking for more real-time insight into their different data silos, so they're trying to implement things like change data capture, leveraging facilities like database logs to identify changes happening on those operational systems and pull them together for BI, for building net-new applications, and so on. Another key challenge we saw was a shortage of skills and staff, specifically skills that can span the complexity of an i-Series or mainframe environment as well as a modern cloud stack built on something like Databricks; that seems to be a key struggle organizations bump up against. Data accessibility ties into this whole idea of eliminating data silos, or putting in pipelines that help you deal with them. Budget is another key challenge, and the reason, as I often say, is that on the integration end I am focused on building pipes that get data where it needs to be, and to be fair, most organizations are not interested in spending their budget on maintenance around pipelines; they want to spend it on innovation and actually building value on top of the data. Poor data quality continues to be a challenge: customers don't trust their data; it's either not consistent across systems, not accurate, or it doesn't have the fidelity and context necessary to make decisions on. And the last thing I'll speak to here is scalability: how do you integrate this data and build data pipelines that can deal with the scale of your business both today and with your expected growth tomorrow? So that's the set of challenges we saw in the survey. But there's one more challenge I want to talk about, and it's one I feel can get overlooked a little bit.
And that challenge is cultural. Cultural barriers within organizations are something we see all the time, and the reason is as follows. Typically, in our experience, an organization may embark on a data strategy where, let's say, they're building out a lakehouse leveraging technology like Azure Databricks. Populating the data may start from some of the low-hanging fruit: talking to a SaaS endpoint is fairly straightforward, and yes, those evolve quickly, but the data is well defined, it abides by a contract that the API specifies, and there's a variety of tooling out there to assist with onboarding it. Our customers will typically start there and then work their way towards more of their mission-critical systems. And time and time again, what I see with customers who work on the mainframe side of the house, or who've got a very large i-Series footprint, is that the teams responsible for those systems are typically siloed from the data teams, the teams responsible for actually onboarding the data into the cloud-native platform. Because of that, there are often limitations, and, like I said, a governance model around who can install software, what ports can be open, and who can directly access those platforms, and that can create friction and introduce challenges in the data onboarding process. What we sometimes see is that the mainframe team requires that no one touches the data that sits on the mainframe, so they might do batch exports of the data assets the data team needs on a nightly basis. Maybe that's fine for a while, but as organizations try to use data on a more real-time or near-real-time basis, that doesn't quite work; a technology like change data capture might be necessary, and in order to implement a successful solution we often need to work with customers on navigating and solving for any cultural hurdles they run across, so we can deliver a technically viable solution that meets the SLAs of the business. This one often doesn't get talked about as much, but we see it as a real challenge and a barrier to success.

All right, so I'm going to outline and dive into four key steps that we implement with our customers for successfully leveraging mission-critical data. Number one: start small, identify some high-value silos, and work to eliminate them. Number two: escape the delay of batch. Batch can be costly; moving large amounts of data every single time you need data is expensive, and it prevents you from building net-new applications and using your data proactively rather than reactively. Real-time data is only as good as how quickly it was delivered. Number three: identify opportunities to scale up. One of the things I see time and time again is that once customers get some success with initial data delivery and business teams start using it, it just creates more demand; there's a positive feedback loop that creates demand for a greater variety of data.
And then the last piece we help customers with is finding ways to modernize existing applications. We'll jump into each of these in a bit more detail, but that's the framework I'm going to walk through next.

All right, eliminating silos, the first one I mentioned. Key things we help our customers with when it comes to eliminating silos: try to leverage standard connectivity protocols wherever possible. That seems like a no-brainer, but with these mission-critical systems it can be more difficult than you might expect. I mentioned the example before of a more manual process where team A does a periodic dump of data from systems the data team requires. Typically how we deal with that is to leverage whatever standard connectivity protocol may exist, be it JDBC for a database, SFTP for a remote file system, or, when it comes to the mainframe specifically, FTPS, because SFTP is not fully implemented for the z/OS subsystem and doesn't have all the knobs you need, or even Connect:Direct, the old NDM product. We try to leverage those standard connectivity protocols wherever possible, given that they're well understood, can be governed, and so on. Another key thing is leveraging existing metadata, which can be difficult, especially as it relates to the mainframe. Copybooks are great, they provide really good metadata, but the problem is that mainframe data doesn't always tightly map to what a copybook says. You may have a numeric field defined in the copybook, but in reality, when you look at the VSAM data set under the covers, the data itself doesn't quite match the numeric data type the copybook is telling you. So while leveraging existing metadata is hugely useful, prevents you from reinventing the wheel on data interpretation, and promotes reusability, keep in mind that you may need additional data quality in there to deal with data that sticks out.
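As a rough illustration of that copybook point, here is a minimal sketch in Python, with a made-up fixed-width layout and hypothetical field names, of checking that the bytes behind a copybook-declared numeric field actually parse as numbers and routing mismatches to a data quality step instead of failing the load. This is not Precisely's implementation, just the general idea.

```python
# Minimal sketch: validate that data behind a copybook-declared numeric field
# really is numeric. Layout, field names, and sample records are hypothetical.

# Tiny stand-in for a parsed copybook: (field_name, offset, length, declared_type)
LAYOUT = [
    ("ACCT-ID",   0, 8,  "NUMERIC"),
    ("ACCT-NAME", 8, 20, "CHAR"),
    ("BALANCE",  28, 9,  "NUMERIC"),
]

def validate_record(record: str):
    """Return (clean_fields, issues) for one fixed-width record."""
    clean, issues = {}, []
    for name, offset, length, declared in LAYOUT:
        raw = record[offset:offset + length]
        if declared == "NUMERIC" and not raw.strip().lstrip("-").isdigit():
            # The copybook says numeric, but the bytes disagree:
            # flag it for a data quality rule instead of failing the load.
            issues.append((name, raw))
        else:
            clean[name] = raw.strip()
    return clean, issues

if __name__ == "__main__":
    good = "12345678ACME SUPPLY CO      000104250"
    bad  = "1234ABCDACME SUPPLY CO      N/A      "
    for rec in (good, bad):
        fields, problems = validate_record(rec)
        print(fields, "| quality issues:", problems)
```

The design point is simply that the declared metadata drives interpretation, but every value is still checked against it so exceptions can be quarantined rather than silently loaded.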
Ensuring encryption of data in flight at every point in the journey is table stakes with these cloud use cases, but again, important. And lastly, we see our most successful customers have flexibility in how they deliver their data. Several may just do a one-way replication from source to target. Others may broadcast, where they want one feed of data going into their warehouse and another feed going straight into a streaming bus for more real-time applications. Some use a combination, and we often work closely with customers to figure out, for their use case and the outcomes they're targeting, the best data delivery topology or combination of topologies to leverage.

The next thing I would say is to replicate data at the speed of business. If you're building a net-new application that's looking to improve customer experience, real-time or near-real-time data delivery is becoming important. One of the things I see working with our customers is that a lot of them are starting to treat data availability the way our HA customers treat infrastructure availability. I've got customers that say 500 milliseconds between an event on my operational system and delivery within my target system is too much, so we're definitely seeing more pressure on data delivery speed. Key things to leverage here: log-based change data capture techniques are very useful. They help us identify transactions as they commit or as they occur, and we're able to minimize our impact on those online backends by using whatever mechanisms exist. If it's DB2 on the mainframe, it's the IFI 306 interface; it could be a VSAM data set, CICS-controlled or not; we can use journals on the i-Series, and redo and archive logs in an Oracle environment. The key point is to leverage what's available to you to identify changes and then perform the data translation and conversion off-platform wherever possible. This is really important for critical environments that are pushing through lots of transactions. If you're talking about a mainframe environment, perform zIIP offloading where possible, and do the conversion off-platform; by moving that work, we decrease the load on those platforms so we don't impact the online application. We're doing this in customer environments today; in certain cases we've got customers pushing a billion and a half transactions in a four-hour window. These data volumes are serious, and we adopt these practices in order to scale to the needs of the business.

With that, like I mentioned before, successful data delivery drives increased demand. That's what I see time and time again, and it's not a linear journey from one thing to another; we see customers enter at various points in the journey. A lot of our customers start by saying, I've got an imperative to get more real-time insight into my customer behavior, so I'm going to leverage the change data capture technique we talked about in conjunction with a platform like Kafka, and build a real-time application on top of Kafka. We see other customers say, I've got an existing warehousing process I want to modernize; today it's deployed in my data center, I'm responsible for allocating not just software but also hardware, I have to maintain it, it doesn't scale to what I need, and I can't easily share data. They may start their journey with a use case on Snowflake. But typically, like I said, successful, on-time, fast data delivery drives increased demand for more data, so we'll often see customers expand to multiple targets. They may adopt a broadcast topology to send data to both Snowflake and Kafka at the same time, or they may leverage the hyperscaler platform to drive a new use case around modernizing existing applications.
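To make the broadcast topology idea concrete, here is a small, self-contained Python sketch of fanning one change event out to a warehouse-style sink and a streaming-style sink. The event shape and sink classes are illustrative stand-ins only, not Precisely Connect's actual interfaces; in a real deployment the sinks would wrap a Snowflake client and a Kafka producer.

```python
# Broadcast sketch: one committed change, many targets.
import json

class WarehouseSink:
    """Stand-in for a Snowflake-bound target (e.g., staged micro-batches or MERGEs)."""
    def deliver(self, event: dict) -> None:
        print("warehouse <-", json.dumps(event))

class StreamSink:
    """Stand-in for a Kafka-bound target (e.g., producer.send(topic, value))."""
    def __init__(self, topic: str):
        self.topic = topic
    def deliver(self, event: dict) -> None:
        print(f"stream[{self.topic}] <-", json.dumps(event))

def broadcast(event: dict, sinks: list) -> None:
    # Deliver the same change event to every configured target.
    for sink in sinks:
        sink.deliver(event)

if __name__ == "__main__":
    change = {"op": "UPDATE", "table": "SUBSCRIBER",
              "key": {"SUB_ID": 1001}, "after": {"PLAN": "PREMIUM"}}
    broadcast(change, [WarehouseSink(), StreamSink("subscriber-changes")])
```

The point of the pattern is that the capture side runs once against the operational system, and additional consumers are added on the delivery side without adding load to the source.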
And on that note, once you've eliminated these silos and set up real-time pipelines and feeds of data from the operational system into the cloud platform, there are a couple of things you can then do. You can modernize more than just data, and there are a couple of forms that can take. Specifically, when it comes to the mainframe, one of the things we often work with customers on is migrating some of their mainframe sort applications. They've got sort applications that they may be moving into a re-hosting environment built on something like Micro Focus, and in addition to replicating the data, we may act as the sort accelerator for them under the covers as they migrate those applications. The application was written a while back, there's no business case to rewrite it in a more modern language, but it's still really important and they just want to lift and shift it. That's one kind of modernization.

The other kind of modernization that change data capture assists with is rebuilding an application, almost leveraging data integration technologies to serve an application integration use case. In this example, let's say you've got application A running on an i-Series platform and you want to migrate it to a more modern stack inside AWS. If the application generating the data is going to continue running on the i-Series, we can use change data capture to replicate that data out into AWS into a variety of different endpoints; Kafka is the one we see as most popular, but it could be a database just as easily. Now that you've got a real-time feed of data from the generating application on the i-Series, you can rebuild an application running natively on AWS using that real-time data feed. That's something we've done with customers before: they typically rebuild the application, and by performing that data integration and serving as the data pipeline from source to target, we help facilitate that process and make it easier.
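Here is a minimal sketch of that rebuild pattern: a new AWS-native service consumes the change feed from Kafka and maintains its own materialized view of the rows replicated off the IBM i. The topic name, broker address, and event shape are hypothetical, and this assumes the kafka-python package is installed and a broker is reachable; it is meant to show the shape of the approach, not a production consumer.

```python
# Rebuild-on-a-CDC-feed sketch: keep a local view of replicated rows current.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "iseries.orders.changes",                    # hypothetical CDC topic
    bootstrap_servers="broker.example.com:9092", # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

orders = {}  # the new application's own view of current state

for msg in consumer:
    event = msg.value                  # e.g. {"op": "...", "key": {...}, "after": {...}}
    key = event["key"]["ORDER_ID"]
    if event["op"] == "DELETE":
        orders.pop(key, None)
    else:                              # INSERT or UPDATE
        orders[key] = event.get("after", {})
    # ...the rebuilt application serves its reads and business logic from `orders`...
```

The operational system keeps generating data exactly as before; the new application simply subscribes to the replicated change stream instead of querying the i-Series directly.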
So with that, I want to jump into a customer story that shows this in action, with Sky TV. Sky TV was in digital transformation mode: there was a C-level initiative to consolidate tool sets and move a lot of their operations and analytics into their AWS environment, and Sky had standardized on Snowflake on AWS as their core platform for analytics and BI. Like I mentioned earlier, they were doing just fine onboarding a lot of their different data assets, but they hit a wall when it came to their mission-critical data. That mission-critical data consisted of business-critical assets such as their billing application, subscriber management, and chart of accounts, all of which runs on their IBM i, and each had its own file structure, so building a generic ETL process was not going to work for them. They had an old, bespoke legacy ETL process they had built themselves, but it was inflexible: it took four to ten hours to run each job, it was prone to network issues if there were outages, and that was only going to get worse going from ground to cloud. It was just not feasible to take that and reorient it towards a more cloud-native model built on Snowflake. What they really wanted was a more automated way to feed data into Snowflake without having to impact IT, allowing IT to focus on higher-value contributions to the business.

So they leveraged Precisely's Connect solution in conjunction with Snowflake to do this. Essentially they deployed Connect, leveraged our change data capture technology on the i-Series to identify changes happening on that operational platform, and we replicated that out to Snowflake on AWS. What this ultimately allows them to do is share with sales a much more current view of subscriber behavior. They're no longer delayed by four to ten hours waiting for ETL processes to run, if those even ran successfully. Data delivery is more automated and near real time, and IT is freed up; they're no longer stuck building and maintaining slow-running batch ETL processes or rebuilding things. They can onboard new data assets in minutes to hours instead of writing new code, and they're no longer tied to batch windows: what took four to ten hours now occurs transaction by transaction leveraging Connect, so they can deliver data in minutes. Now they can do more innovation on top of Snowflake and build new applications while leaving the IBM i back ends running unchanged. The IBM i still runs the business, all those business-critical systems continue to run there, and the data served to Snowflake can be opened up to a variety of different lines of business, not just sales. So that silo is broken down, it's driving a need for more data from their IT teams, and they're able to deliver on their digital transformation initiative.

The key takeaways I'd stress: I really do see these mission-critical data assets as being critical to helping organizations fulfill the promises of digital modernization. It's great to onboard data from modern data sources, and it's a great way to start proving out value, but I don't think the promise is fulfilled until you've solved for the complexity, the variety, and the velocity of all your different data within the enterprise; that promise isn't fulfilled until you've tackled this last leg of the journey. The other thing I would stress is that the challenges we see organizations face time and time again are technical, yes, but there are also organizational challenges that need to be solved for. Number three, prove out and get some early wins. Like I said, successful data delivery just drives demand for more data, and our most successful customers have typically moved to near-real-time data delivery wherever possible. And the last piece: once you've unlocked that data, there's an opportunity to intelligently modernize your applications, whether that's a lift and shift, like a mainframe sort card rehosting type application, or rebuilding something more strategic that you want to surface. There are a bunch of different opportunities, and we at Precisely would love to chat with you about them and about best practices that we've seen work at other customers. We work with 99 of the Fortune 100 and we've got over 12,000 customers globally, so we've seen a lot of different patterns play out over the years, and we'd love to chat with you about that. So with that, let me turn it over to questions. Shannon, I think you may be monitoring those as well.
Yeah, if you have questions for Ashwin, please feel free to submit them in the Q&A portion of your screen. And to answer the most commonly asked question: just a reminder that I will send a follow-up email by end of day Thursday with links to the slides and the recording. So diving in here, Ashwin, what were the sources for the growth trends data that you showed?

Yeah, we got some of the data from a couple of BMC reports, the skills information was through a survey that Precisely conducted, and some of the market projections, I believe, were through MarketsandMarkets, but I can double-check that.

Okay. I'll give everyone just a moment to submit any additional questions they have. So Ashwin, from your perspective, what's the biggest aha moment your customers find when they install Precisely to solve these problems?

That's a good question. I think the biggest aha moment for our customers is how data integration can be an enabler. I talked about how data delivery can drive more demand, or success begets success, the positive feedback loop. It's interesting how quickly we'll see customers identify new use cases because they've successfully onboarded this really complex data asset and are now doing it at scale; seeing them identify new use cases is probably the most interesting thing once they've deployed us. That could be a new use case in the integration space that involves Precisely, it could be a new use case that's completely unrelated to Precisely but is something new they can do with the data, or it could be a third category where they identify a deficiency in their existing data management architecture and realize, you know what, we don't quite have the observability into our data that we thought we did; we may need to more closely monitor our data and its quality and figure out how we take corrective action on it. So there are other data integrity challenges that come up, time and time again, after they've successfully deployed us. It's pretty interesting, the mix of reactions we typically see.

That's amazing. Again, I want to give everyone a little opportunity to submit questions in the Q&A portion. Anything you want to add, Ashwin, anything you felt you didn't dive into enough or that commonly comes up?

Yeah, maybe the other thing I would mention is that I spoke today about what we do with our customers on premises, in terms of our on-prem Connect product; we talked about the Sky use case. One of the trends we're seeing increasingly with our customer base is that as they move their warehousing to SaaS offerings and work with a lot of different data, they also want to start moving their integration to more of a SaaS-type deployment model. And that's where our Data Integrity Suite comes in.
So if you've been following Precisely, or you've heard of us, you may have heard about our Data Integrity Suite. What it really is, is a set of interoperable modules that help customers deliver on key data integrity initiatives. We've taken this core integration capability and brought it into the Data Integrity Suite. So whether you want to deploy these capabilities as software in your environment or use our Data Integrity Suite, we can help you solve these processes in a very holistic fashion based on the needs of your business. We leverage a hybrid architecture so you can maintain ownership of your data as it's moved around, and it's one of those exciting growth areas for us. So that's another thing I wanted to call attention to.

Perfect, and we do have some questions coming in now. So Ashwin, with respect to cybersecurity, what are your thoughts on risks that may be or can be overlooked?

That's a good question. I see this one increasingly coming up with our public cloud use cases. We've got a good number of customers who today are, like I said, building a new application in AWS, for example, based on data that's coming from their mainframe backends in the data center. Increasingly, what they're trying to do is put an extra layer of security in place, specifically application-level encryption. The data in flight on the wire is encrypted while we replicate it out, but once it lands, they want to be able to protect PII data so that even if someone were inspecting that data, say within Kafka, only those authorized to work with it could actually decrypt it, detokenize it, and so on. That's one of those areas where we see customers dealing with this today; we're working on some solutions at Precisely and putting some thought into how we can help customers with that kind of application-level encryption while maintaining the performance we're known for. But it's an important area to keep in mind. The other thing, as it relates to data security, is having a good handle on where your data is being processed. When I talked about our Data Integrity Suite before, I mentioned that it is a SaaS offering, but we actually have more of a hybrid SaaS flexibility when deploying data integration pipelines: you can manage it in your data center, in your private cloud, or in your public cloud VPC, and we don't actually touch your data. I think that's an important thing to consider as well, because we work with heavily regulated mainframe environments where you can't have the public cloud just touching the mainframe. So that's another area where we see good customer feedback and interesting use cases.
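To make the application-level encryption idea concrete, here is a short Python sketch of encrypting only the PII fields of a record before it is handed to a Kafka producer, so that downstream consumers without the key see ciphertext. The field names are hypothetical, the cryptography package's Fernet is used only as a stand-in for whatever encryption or tokenization scheme an organization actually adopts, and in practice the key would come from a KMS or HSM rather than being generated inline.

```python
# Sketch: field-level ("application level") protection of PII before produce.
import json
from cryptography.fernet import Fernet  # pip install cryptography

PII_FIELDS = {"ssn", "email"}        # which fields to protect (hypothetical)
key = Fernet.generate_key()          # in practice: fetched from a KMS/HSM
f = Fernet(key)

def protect(record: dict) -> bytes:
    out = dict(record)
    for field in PII_FIELDS & record.keys():
        out[field] = f.encrypt(str(record[field]).encode()).decode()
    return json.dumps(out).encode()  # ready to hand to a Kafka producer

def reveal(payload: bytes) -> dict:
    rec = json.loads(payload)
    for field in PII_FIELDS & rec.keys():
        rec[field] = f.decrypt(rec[field].encode()).decode()
    return rec

if __name__ == "__main__":
    msg = protect({"cust_id": 42, "email": "a@example.com", "ssn": "123-45-6789"})
    print(msg)           # PII is unreadable even if the topic is inspected
    print(reveal(msg))   # only key holders can detokenize downstream
```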
I like it. So how is it that Precisely's CDC can integrate with operational mainframe data without adversely impacting that platform's performance?

That's a good question. There are two key things I would stress. Number one is leveraging the facilities the mainframe already provides us: mainframes have things like user exits that can often choke performance, so we try to stay away from those wherever possible, and that protects the application itself. The other piece, which I mentioned on the slide, is being intentional about where the data processing happens. Copybook mapping, EBCDIC-to-Unicode translation, data type translations, record parsing: all of that is fairly CPU intensive, and doing it on the mainframe would definitely drive up your MIPS consumption. By doing that off-platform, we help reduce the cost of this kind of workload and protect the mainframe. And if there's a desire to go more in depth on it, we're happy to set up an in-depth call with our technical team.
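As a rough illustration of the off-platform conversion work just described, here is a small Python sketch: EBCDIC text decoded with Python's built-in cp037 codec, plus a tiny packed-decimal (COMP-3) decoder as one example of a data type translation. The sample bytes and field layout are made up, the correct EBCDIC code page varies by shop, and this is a teaching sketch rather than Precisely's actual conversion engine.

```python
# Off-platform conversion sketch: EBCDIC text and packed-decimal fields.

def decode_text(raw: bytes) -> str:
    # cp037 is a common US EBCDIC code page; the right code page varies by shop.
    return raw.decode("cp037").rstrip()

def decode_comp3(raw: bytes, scale: int = 0) -> float:
    """Unpack a packed-decimal field: two digits per byte, sign in the last nibble."""
    digits = ""
    for b in raw[:-1]:
        digits += f"{b >> 4}{b & 0x0F}"
    last = raw[-1]
    digits += str(last >> 4)
    sign = -1 if (last & 0x0F) == 0x0D else 1   # 0xD = negative, 0xC/0xF = positive
    return sign * int(digits) / (10 ** scale)

if __name__ == "__main__":
    name = bytes([0xC1, 0xC3, 0xD4, 0xC5])   # "ACME" in EBCDIC (cp037)
    balance = bytes([0x01, 0x23, 0x4C])       # +123.4 when scale=1
    print(decode_text(name), decode_comp3(balance, scale=1))
```

Running this kind of byte-level interpretation on a distributed platform, rather than on the mainframe itself, is what keeps the conversion cost off the source system.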
Love it. And Ashwin, do you work with SAP data like you do mainframe and IBM i?

Yeah, we do have a fairly large SAP business, actually. For those who may not be familiar, we acquired Winshuttle last year, and they've got a really compelling set of capabilities in the SAP automation space. So we do have some interesting integration use cases that we perform with SAP to help customers automate their business processes and things of that nature. We're also continuing to work with customers on potential new use cases to help with analytics on SAP data, but that's something we're currently working on.

From an environmental, social, and governance perspective, to what degree does Precisely facilitate an organization's ability to address any of its components?

That's great. We don't have an ESG offering, but our tools and what we offer from a data integrity perspective do help companies address ESG initiatives. Specifically as it relates to some of our data: Precisely has a fairly large data business where we can provide third-party data around a variety of things, be it dynamic weather, demographics, and a whole host of other areas, and we acquired a business called PlaceIQ that works, I believe, in the marketing space. So the auxiliary third-party data we offer can definitely be put towards these kinds of ESG initiatives, and we do have field teams who work with our customers on helping facilitate them. But I would say our core technology is not an ESG technology; we help support those initiatives through the data integrity message, essentially. It's a good question.

Yes, indeed. We have done quite a few webinars with Precisely on governance and the data governance capabilities within the tool. Very nice. Well, that's all the questions we have right now. Anything you want to wrap up with, Ashwin, before I give people some time back?

Yeah, I think that's all I had. I really hope this session was useful to you all, and I appreciate the time and the opportunity to speak with you today. Please feel free to reach out to us if you have any challenges or needs in the data integrity space around integration, data governance, data enrichment, location intelligence, data quality, and observability. We're happy to help; it's what keeps us getting up for work every day, and it's a pretty exciting space to be in. So again, thanks for the time and the opportunity to present to you.

Indeed, thank you so much, and thanks to Precisely for sponsoring today's webinar. Again, just a reminder to everybody, I will send a follow-up email by end of day Thursday with links to the slides and links to the recording. Ashwin, thank you so much. No problem, thank you, Shannon. Thanks, everybody. Thanks, everyone.