Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Manager of DataVersity. We'd like to thank you for joining this DataVersity webinar, Proven Strategies for Hybrid Cloud Computing with Mainframes, sponsored today by Qlik. Just a couple of points to get us started: due to the large number of people attending these sessions, you will be muted during the webinar. For questions, we will be collecting them through the Q&A in the bottom right-hand corner of your screen. Or if you'd like to tweet, we encourage you to share your questions on Twitter using the hashtag DataVersity. And if you'd like to chat with us or with each other, we certainly encourage you to do so; just click the chat icon in the bottom right-hand corner of your screen for that feature. As always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and any additional information requested throughout the webinar. Now, let me introduce our speakers for today, Adam Mayer and Phil de Valle. Phil is an AWS Principal Solutions Architect and Global Technical Lead for Mainframe and Legacy Modernization. Phil has worked with mainframes for about 20 years, primarily focusing on modernization initiatives for customers worldwide. In his current role, he advises customers and partners on how to best leverage the AWS value proposition for mainframe and legacy systems, and he contributes to AWS and partner innovations for unleashing the value of legacy assets with AWS. Adam is responsible for CDC streaming and mainframe product marketing, in addition to delivering Qlik's Internet of Things and GDPR go-to-market strategy, with a strong technical background in computing spanning over 20 years, underpinned by an incisive engineering perspective. Adam is an avid follower of new technology and holds a deep fascination for all things IoT, particularly on the data streaming and analytics side. He loves finding new ways to make it all as translatable, visual, and understandable to as many people as possible. And with that, I will turn it over to Adam to get us started. Adam and Phil, hello and welcome.

Excellent. Thank you, Shannon. So data is the lifeblood of every company, and getting access to clearer, better, faster insights is now more critical than ever. And here's a fun fact: with mainframes managing almost 90% of all credit card transactions online now, it just goes to show that mainframes are still very much alive and extremely mission critical for many organizations. But how do you unlock your mainframe data's true potential? My name is Adam Mayer from Qlik, and I'm going to talk you through how you can do just that. So it's about modernizing and automating your data integration, and what do we mean by that? Like it says here, it's really about effectively capturing large volumes of change data as it changes from a wide range of heterogeneous sources, and being able to deliver analytics-ready data in real time to your cloud platform like AWS. This is where you can then take advantage of it and do things like cataloging it to enable discovery, and then provisioning it out to the rest of the organization, to your analysts and data scientists and beyond. So in terms of doing that, it's about having the most comprehensive platform that offers change data capture and ideally delivers analytics-ready data, so you can then publish that out to your BI and data science teams' chosen tools.
You want to be looking to partner with best-in-class cloud integration solution providers and cloud providers, and you want to be able to do it as easily as possible, with as much automation as possible. And that's about partnership with trusted market leaders. And this is what the Qlik data integration platform delivers. It's been architected from the ground up for real-time change data capture and analytics-ready data delivery. And it's about seamlessly moving real-time data between those heterogeneous systems, connecting on-premise systems with cloud environments like AWS, and even moving data between other cloud providers into AWS, all under one roof and one platform. And in fact, together we've migrated and integrated more than 200,000 databases and mainframes to the cloud now. And it's a complete and automated solution, from initial instantiation, through target table creation, automated mappings and schema synchronization, to automatically allowing you to create data warehouses, data marts and data lakes, being able to catalog those data assets and then publish out to those BI and data science tools. And this is all delivered with scale and stability. In fact, it's relied on by more than two and a half thousand customers now, including half of the Fortune 100. And that's backed up by an ever-growing dedicated R&D team that can scale up to embrace the ever-changing tech landscape, driven also by customer needs. And we also have a wide team in sales and professional services with deep expertise in data integration and analytics. And in terms of what's actually driving the data integration business, there are three key trends, and they're all driven around requiring real-time data into and from on-premise systems and cloud platforms. So on the left, we have cloud application development. This is really about taking legacy applications through modernization programs and initiatives, so you can build out new applications onto the cloud faster and more easily, particularly taking advantage of microservices architectures, so you can then deliver much higher scalability and elastic environments that can scale out based on demand, and also take advantage of the infrastructure maintenance cost savings that cloud offers. The middle one is data warehouse modernization. This is the drive to reduce the costs associated with your legacy EDWs and provide elasticity, again by taking advantage of cloud environments. It allows you to meet new business requirements and support more advanced analytics. So data warehouse automation really is an approach that looks to replace your traditional ETL with much more modern, self-service capabilities. And then the last one, on the right there, is the next generation of analytics and data monetization. It's driven by the need to analyze a much broader set of data, unstructured and structured, and to meet the ever-changing and growing needs of the organization. So bringing in things like intuitive search across data, leveraging technologies such as artificial intelligence, machine learning and the Internet of Things, and then bringing in decision automation and those kinds of things to really start gaining that competitive advantage.
And this piece is around managing your data lake creation to stop it turning into that infamous data swamp, and looking at your biggest and widest data and being able to process that at scale into more modern platforms, particularly streaming systems like Kinesis on AWS. And when we look at our enterprise customers that want to leverage their mainframe data for better analytics and overall reduce that total cost of ownership, these are the typical objectives that they all have in common. It's about wanting to deliver more and better insights to the business in an agile fashion. It's wanting the ability to extract the mainframe data to separate platforms, particularly out on the cloud, for better analytics initiatives, and really looking to reduce those costly MIPS charges at the mainframe source. And typically this is about supporting cloud migrations and better analytics initiatives on the cloud. But trying to meet those objectives doesn't come without its challenges. More often than not, it's a case of not being able to afford to increase that costly MIPS consumption. So it's about lowering the impact on the mainframe and delivering at low latency: in order to realize the highest possible analytics value, the data has to be copied from those source mainframe systems in near real time, and you want to be able to do that without impacting them. It's really about platform flexibility, because organizations like yours need to be able to manage and shift workloads between the multiple legacy systems you've got and the modern platforms you're building on and aspiring to, and that requires different levels of skill set. And it's about being able to deliver that in a timely fashion and demonstrating quick returns on your investment, particularly on those analytics initiatives, doing it as efficiently as possible, and looking to reduce the often manual and really time-consuming process involved in landing, staging and reconciling your data. And this is where the Qlik data integration platform can come in. It's about allowing you to modernize, automate and really simplify your data integration, with analytics-ready data delivery through streaming data pipelines. So on the left-hand side, we have this wide breadth of heterogeneous sources, including the mainframe, that we can cater for, and we allow you to create automated streaming data pipelines that capture the changes at source as and when they occur and then deliver them into your cloud as you need them. And it allows you to really quickly create and deploy analytics-ready structures with automated mapping, automated target table creation and data instantiation, wherever you're looking to land it. So whether that's a CDC streaming use case into any kind of database or messaging system like Kinesis, or whether you're looking to automatically create those data warehouses and data lakes.
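To make that streaming target a little more concrete, here is a minimal, purely illustrative Python sketch of a downstream consumer reading CDC change events that have been landed in an Amazon Kinesis data stream. This is not Qlik Replicate code; the stream name, region and JSON record shape are assumptions made only for the example.

```python
import json
import time

import boto3  # AWS SDK for Python

# Hypothetical stream fed by a CDC tool; name and record layout are assumptions.
STREAM_NAME = "mainframe-cdc-stream"

kinesis = boto3.client("kinesis", region_name="us-east-1")

def consume_changes(poll_seconds: int = 60) -> None:
    """Read CDC change events from the first shard and print each operation."""
    shard_id = kinesis.describe_stream(StreamName=STREAM_NAME)[
        "StreamDescription"]["Shards"][0]["ShardId"]
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM_NAME,
        ShardId=shard_id,
        ShardIteratorType="LATEST",  # only changes arriving from now on
    )["ShardIterator"]

    deadline = time.time() + poll_seconds
    while time.time() < deadline:
        response = kinesis.get_records(ShardIterator=iterator, Limit=100)
        for record in response["Records"]:
            change = json.loads(record["Data"])  # assumed JSON payload
            print(change.get("operation"), change.get("table"), change.get("data"))
        iterator = response["NextShardIterator"]
        time.sleep(1)  # stay within per-shard read limits

if __name__ == "__main__":
    consume_changes()
```

In practice the consumer would typically be an AWS service or application rather than a script like this; the point is simply that once changes arrive as stream records, anything downstream can react to them in near real time.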
Along with that, we deliver cataloging capabilities that really help to close the last mile in total visibility of your data landscape. The cataloging allows you to take stock and inventory of all the data that you've got, and it allows the rest of the organization to search for and retrieve those pre-prepared data sets to be used across the organization at scale. And it can be consumed by a variety of use cases, from any analytical tool of your choice, to taking advantage of technologies like AI and machine learning that are in much demand now, and of course the advanced data science tools that you have at your disposal. So in terms of Qlik and AWS working better together, just to show you a snapshot here of that wide breadth of heterogeneous sources that we support and allow you to propagate out into the AWS environment and ecosystem, taking advantage of the many AWS services that are available, in real time, using best-in-class CDC technology. And just to point out here, when you look at the mainframe sources, this is also about delivering a solution that allows you to not just analyze your mainframe data in isolation; you can bring in all those other data sources as well. So to put it simply, the Qlik data integration platform really allows you to quickly get data into the AWS ecosystem and add value throughout that whole value chain. You can really use any BI tool of choice for the analytics once you've got the data in AWS, but we believe that if you also choose Qlik for data analytics, we have a data analytics platform as well, completely separate but integrated too. If you use that, you can not only easily use the data you've got inside the AWS ecosystem, but actually find those actionable insights faster than before. But really, for today's talk, the key is getting data into AWS as quickly and efficiently as possible, so you can then take advantage of all those AWS services available to you, with the data landed already optimized for the targets you're choosing. So that's a great segue to hand over to Phil de Valle from AWS, and he can tell you more about how you can get more value out of your mainframe data in AWS. Phil, over to you.

Thank you, Adam. So, you can switch to the next slide. Actually, I'll start by focusing on the AWS platform itself and then I'll get into some use cases. So AWS has significantly more services and features than any other cloud provider. We have over 175 services, with infrastructure technologies for compute, storage and databases, and we also have technologies for things such as machine learning, artificial intelligence, data analytics and the Internet of Things, and all those building blocks are readily available at your fingertips to test new ideas quickly. With numerous services, it's actually faster, easier and more cost effective to build nearly anything you can imagine. Also, AWS has the deepest functionality within all those services. For example, AWS offers the widest variety of databases that are purpose built for different types of applications. This way you can choose the right tool for the job to get the best cost and the best performance. On top of this, the AWS Marketplace offers thousands of software listings ready to be deployed from ISVs; as an example, Qlik Replicate is available on the AWS Marketplace and can be deployed in minutes. AWS also has the largest and most dynamic community, with millions of active customers and tens of thousands of partners globally.
We have customers across virtually every industry and of every size, including startups, enterprises and public sector organizations. On the AWS Partner Network side, it includes thousands of system integrators who specialize in AWS services. It also includes tens of thousands of ISVs who adapt their technology to work on AWS. With AWS, you can leverage new technologies to experiment and innovate more quickly. We are continually accelerating our pace of innovation. For example, in 2014, AWS pioneered the serverless computing space with AWS Lambda. Another example is Amazon SageMaker, a fully managed machine learning service that facilitates using machine learning without any previous experience. Another topic which is important to our enterprise and mainframe customers is operational expertise, and in this space AWS has unmatched experience, reliability, security and performance. Customers use AWS for business critical workloads, and AWS has been delivering cloud services for over 14 years. So AWS has more operational experience, at greater scale, than any other cloud provider. From an infrastructure perspective, AWS has the most extensive global cloud infrastructure. No other cloud provider offers as many regions with multiple Availability Zones. We have 77 Availability Zones within 24 geographic regions around the world, and we have plans to add nine more Availability Zones and three more AWS Regions. These are important factors when choosing the right platform for mainframe data and innovation around it. Now we'll see how our customers leverage Qlik Replicate to create business value, and we'll look into more specific use cases. So you can move to the next slide, Adam. Thank you. The first use case I want to talk about is augmenting mainframes with agile data analytics services on AWS. Mainframe data can include decades of historical business transactions for massive numbers of users, so it's a strong business advantage that customers want to benefit from. We see customers use big data analytics to unleash mainframe data business value, and we provide services for the full data life cycle, from ingestion to processing, storage, analysis, visualization and automation. The use case you see here is also applicable to infrastructure operational analytics. Mainframes are expensive and complex, so customers constantly have to optimize and tune their mainframes to reduce their CPU or MIPS consumption, and they want to do this while still meeting their performance objectives. Mainframe system management metrics, such as SMF data, can be replicated over to AWS, and then they can be analyzed and visualized; we can create some alerts, and we can actually do some mainframe tuning just by analyzing this data. So in this use case, you can see that Qlik Replicate copies mainframe data in real time from relational, hierarchical or mainframe file-based data stores, and it copies the data over to AWS data lakes, data warehouses or data stores. On the AWS side, customers can choose Amazon S3 for the data lake, for example. They can also choose Amazon Redshift for the data warehouse, or they can choose Amazon RDS or Aurora for their managed relational databases. AWS also offers choice for data processing and analysis. For example, we can use Amazon EMR, which is a managed Hadoop framework; we can use Amazon SageMaker for machine learning models; or we can use Amazon Kinesis Data Analytics for streaming data analysis.
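As a rough illustration of the data lake pattern just described, here is a short Python sketch, assuming the AWS SDK (boto3), of how a micro-batch of replicated change records might be written to an Amazon S3 data lake in a date-partitioned layout. The bucket name, prefix and record fields are hypothetical, and this only stands in for what a replication or ingestion tool would normally do for you automatically.

```python
import json
from datetime import datetime, timezone

import boto3

# Hypothetical bucket and prefix for the replicated data lake; names are assumptions.
BUCKET = "mainframe-data-lake"
s3 = boto3.client("s3")

def land_change_batch(table: str, changes: list) -> str:
    """Write one micro-batch of CDC changes as JSON Lines, partitioned by table and date."""
    now = datetime.now(timezone.utc)
    key = f"raw/{table}/dt={now:%Y-%m-%d}/changes-{now:%H%M%S}.jsonl"
    body = "\n".join(json.dumps(change) for change in changes)
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))
    return key

if __name__ == "__main__":
    sample = [{"operation": "UPDATE", "account_id": 42, "balance": "101.25"}]
    print(land_change_batch("accounts", sample))
```

A layout like this keeps the landed data easy to query and process with the analytics services mentioned above, because each table and day becomes its own prefix.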
For visualization and business intelligence, customers can use Amazon QuickSight. Now, going beyond analytics only, some customers want to use mainframe data for more advanced purposes. So let's see this in our next use case. This use case is about augmenting the mainframe with capabilities relying on data replicated to AWS. Because mainframe development cycles with legacy languages are slow and rigid, customers use AWS to build new services quickly. These new services access real-time mainframe data in the local AWS data store. So you can see this is a variation on the previous use case. The local mainframe data is not used for data analytics here; it's used for new communication channels or new functionalities for the end users. The new AWS functions augment the mainframe application. For example, we see customers who are creating new channels for mobile or voice-based applications, and they can also develop innovations based on microservices or machine learning. So in this architecture, you can see that Qlik Replicate copies mainframe data in real time to the AWS managed relational data store; the diagram shows Amazon Aurora or Amazon RDS, but we can also stream and process the data via Amazon Kinesis. And we use a local AWS data store because oftentimes we have a strong requirement to avoid latency issues. A local data store is also needed to avoid increasing the mainframe's expensive MIPS consumption. Once the mainframe data is in a local AWS data store, we can create new services quickly on top of it. For example, a new mobile application or voice interface can be added using Amazon API Gateway, or using Amazon Lex, or using Amazon Alexa skills. As far as the business logic is concerned, it can reside in microservices; it can be hosted in AWS Lambda or in containers within Amazon ECS (a minimal sketch of such a function follows below). Then some innovative services can also benefit from the Amazon machine learning services. Now, because data is duplicated between the mainframe and AWS data stores, the data architect building this solution needs to be careful about potential data consistency or integrity concerns. Some solutions for this can be to use read-only and read-write patterns, or some customers actually choose to use consistency checks and remediations. We talked about mainframe MIPS, which are expensive, and some customers really focus on reducing costs by reducing their MIPS consumption. They do this by offloading some processing to AWS. So let's see this in the next use case. Each MIPS on the mainframe can easily cost several thousand dollars every year. We see customers with annual mainframe costs in the tens of millions of dollars, and sometimes in the hundreds of millions of dollars per year. So some customers decide to reduce mainframe costs by migrating or offloading very specific workloads to AWS. This use case is not about migrating complete mainframes, but about executing pinpoint transactions in parallel on AWS, and in this case Qlik Replicate facilitates the data movements in between. Because of data replication consistency and latency constraints, this use case does not fit all mainframe workloads; only specific mainframe data workloads are better suited for offload. We already mentioned data analytics workloads, but we can add other workloads that are good examples for offloading. We can offload some mainframe batch jobs that create reports, archive data or transmit files to partners.
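To tie the augmentation use case together, below is a minimal, hypothetical sketch of an AWS Lambda function behind Amazon API Gateway that serves a read-only lookup from a locally replicated copy of a mainframe table in Amazon Aurora. It assumes an Aurora cluster with the RDS Data API enabled; the ARNs, database, table and field names are placeholders for illustration, not anything prescribed by Qlik or AWS.

```python
import json
import os

import boto3

# Assumptions for illustration: an Aurora cluster with the RDS Data API enabled,
# holding tables kept current by CDC replication from the mainframe.
rds_data = boto3.client("rds-data")
CLUSTER_ARN = os.environ["CLUSTER_ARN"]   # placeholder, set on the function
SECRET_ARN = os.environ["SECRET_ARN"]     # Secrets Manager credential for the DB

def handler(event, context):
    """API Gateway-backed Lambda serving a read-only balance lookup
    from the locally replicated copy of a mainframe account table."""
    account_id = event["pathParameters"]["accountId"]
    result = rds_data.execute_statement(
        resourceArn=CLUSTER_ARN,
        secretArn=SECRET_ARN,
        database="replicated_core",  # hypothetical database name
        sql="SELECT balance FROM accounts WHERE account_id = :id",
        parameters=[{"name": "id", "value": {"stringValue": account_id}}],
    )
    records = result.get("records", [])
    if not records:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    field = records[0][0]
    balance = field.get("stringValue", field.get("doubleValue"))
    return {"statusCode": 200,
            "body": json.dumps({"accountId": account_id, "balance": balance})}
```

A read-only function like this is also the shape of workload that suits the offload pattern being discussed here, since it never writes back to the replicated copy.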
Beyond those batch jobs, we can also offload specific functions or data access types, such as read-only transactions. In this example, a customer can choose to keep the read-write transactions on the mainframe while offloading the read-only transactions to the AWS side. On the data side, Qlik Replicate takes care of the data movement between the mainframe and AWS. And on the application logic side, the specific functional behavior is reproduced using various strategies. There are different strategies available for that, and depending on the number of lines of code, the timeframe and the target technology, we would pick one strategy or another. In this architecture, Qlik Replicate copies mainframe data in real time to the appropriate AWS data store. If it's relational data, it lands easily in Aurora or RDS; for mainframe hierarchical or legacy data files, such as indexed files, the data is converted via Qlik Replicate into the proper AWS data store. For the specific offloaded function logic, AWS provides a choice of compute services: that logic can be deployed on Amazon EC2, on container services, or even on serverless compute such as AWS Lambda. Now, as soon as critical business transactions are offloaded or migrated to AWS, the quality of service becomes very important. So Adam, you can move to the next slide now. Thank you. This is actually a frequently asked customer question: what about the quality of service for my enterprise data and applications? Mainframe systems often have stringent non-functional requirements, so when modernizing to AWS, we make sure that we meet or exceed these requirements, and AWS offers numerous capabilities to execute enterprise applications securely and reliably. I'll start by mentioning the AWS Well-Architected Framework. It's a framework that's been developed to help cloud architects build secure, high-performing and resilient infrastructure for their applications. It's based on five pillars: operational excellence, security, reliability, performance efficiency and cost optimization. This Well-Architected Framework provides a consistent approach for customers and partners to evaluate architectures; this way, they can implement designs that will scale over time. That means for most quality of service requirements, or non-functional requirements, we have AWS services, features or capabilities which can satisfy them. Let's look at security, for example. Cloud security is AWS's highest priority, so we've built strong safeguards to help protect customer privacy. AWS is architected to be the most flexible and secure cloud computing environment. Our core infrastructure is built to satisfy the security requirements of the military, of global banks and of other high-sensitivity organizations. That security is supported by 230 AWS security services and features. You can see here on this slide some of the services and features we have for encryption, confidentiality, identity and access management, key management, auditing, et cetera. Now, on the high availability side, which is a common requirement coming from mainframes, we use a model that's based on AWS Regions and Availability Zones. This model has actually been recognized by Gartner as a recommended approach for running enterprise applications that require high availability. It provides regional redundancy with Availability Zones in separate, isolated locations.
And actually some AWS services have out-of-the-box features to easily deploy in a cross-AZ or multi-AZ topology. AWS infrastructure also provides for global redundancy, and we use AWS Regions for that, which are also in separate geographic areas. Regarding scalability, we can do both vertical and horizontal scalability. We even take it one level further with elasticity, where resources are dynamically adjusted to the load. The other advantage of this elasticity is that it helps reduce the blast radius, because we have a higher number of small processing instances. It also helps reduce cost, because on a pay-as-you-go basis you only pay for the number of instances strictly necessary for the current load. For system management, there is a large choice of services available: centralized monitoring, centralized log management, centralized backup, system automation and many more. We also provide many services and features designed to help control and reduce costs; you can see here some key features such as Cost Explorer or optimized pricing. Now, the last topic I want to talk about on this slide is agility. That's a very important aspect for businesses, and we can increase the agility of workloads on AWS along many dimensions. It starts with infrastructure automation, for example with infrastructure as code. It continues with agile application DevOps relying on the AWS code services; for example, we can create a CI/CD pipeline with CodeCommit, CodeBuild, CodePipeline and CodeDeploy. And then we reinforce the agility with the largest choice of services available at your fingertips for building new innovations. So all these capabilities help meet mainframe workload requirements. That being said, I'll now transition over to Adam for him to share some customer examples.

You're on mute, Adam, we don't hear you yet. Can you hear me now? Yes, now we can hear you. Okay, I'm getting double muted on my little speakerphone thing here; apologies for holding up the show. Thank you very much for that, there are some really great in-depth insights there on the AWS platform. So yes, we'll talk about a customer example, but first I just want to unpack a little bit of the product you were talking about there, Qlik Replicate. This is part of the data integration platform, the part that we call CDC streaming, and in terms of the Replicate architecture, we'll just take a high-level view here. Typically it's installed as a middle-tier server, and on the diagram you can see the data sources on the left side; they could be on-prem or in the cloud, and we have that wide range of endpoints that we support. Remember the list I talked about earlier? Legacy systems, including the mainframe, relational databases, warehouses, lakes and flat files. We allow you to capture data from those sources and automatically apply lightweight transformations and filtering to the data in flight, before we propagate the data to that wide range of targets depicted on the right side. Which again can be on the AWS cloud or on-prem as well, and it opens the data up to lots of different formats and conversion into streaming messaging platforms like Kinesis, and the other examples that Phil just walked you through. So with the Qlik Replicate product, you can seamlessly replicate data from that initial full load, and it manages those batches for you.
And it will automatically switch to capturing and replicating just the changes in the data as and when they occur at the source, and we do that through log-based change data capture. We apply the transformations as much as we can in memory, and provide filtering capabilities on the data as well. An example is if you wanted to bring in ten years' worth of data, but you wanted to filter that down to a single year of the most recent transactions, or to particular regions, those kinds of things. This can really help to improve speed, and it can also be used as a security measure, for example by filtering out and helping to obfuscate sensitive data such as PII during that transformation process (there's a conceptual sketch of this kind of filter and masking below). And just as a point of note, additional transformations can be applied by utilizing the rest of the Qlik data integration platform further upstream, such as automating the creation of those data marts, warehouses and lakes by taking advantage of the other product lines that make up the data integration platform, Qlik Compose and Qlik Catalog. So with Replicate there's a lot of flexibility in the data flows that can be configured, such as replicating data from one source and then fanning out to many targets, as in the case of the customer we're just about to talk about, and vice versa as well, even migrating data from multiple disparate systems scattered about the place out into the cloud, into one or more targets. And you may notice that we've got the persistent store at the bottom. A key point to make here is that we don't use this to store the data being replicated across; it's used to store the configuration and the state of the replication tasks. Every replication source and target configured inside Qlik Replicate is configured as a single task, and it's this configuration that we store: the metadata, the fields and tables that have been selected, and the transformations that are required. The state that we store is basically the last position of the replication task, from a source and target perspective, so we can pick up where we left off in case of any interruptions, whether they're errors or manual pauses in the replication tasks. Think of it as a kind of bookmark functionality. The key thing here is that all of this can be done at scale, and we have many customers running hundreds of tasks in their production systems. When it comes to the tasks themselves, I just want to take another quick look at a high-level logical overview of how they work. Again, on the left-hand side we've got the source databases, and the targets are on the right. If you look at the top layer of the diagram, this is where we're showing the batch data flow. The batch transfer is about taking snapshots of the tables from the source database; that could be all of those tables or a selection, as you choose, and that's all configurable. For all of the sources that we support, we've done a lot of work in the backend to optimize the unloading. The transformations and filters occur as we just talked about, and then that prepared data is landed in the target of choice. Again, we've done a lot of work in the backend, using specific loaders, to optimize and ensure that the data is landed correctly and in the proper format for the target, such as converting a typical relational database source into streaming messages.
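As a purely conceptual illustration of the in-flight filtering and PII obfuscation just mentioned (in Qlik Replicate these are configured through its interface rather than hand-written as code), the Python sketch below keeps only the most recent year of transactions and hashes a card number field. The field names and the one-year cutoff are assumptions for the example.

```python
import hashlib
from datetime import date, timedelta

# Conceptual illustration only: shows the idea of keeping recent rows and
# obfuscating a PII column while change data is in flight to the target.
CUTOFF = date.today() - timedelta(days=365)

def keep_row(row: dict) -> bool:
    """Filter: only replicate transactions from the most recent year."""
    return row["txn_date"] >= CUTOFF

def mask_pii(row: dict) -> dict:
    """Transformation: replace the card number with a one-way hash."""
    masked = dict(row)
    masked["card_number"] = hashlib.sha256(row["card_number"].encode()).hexdigest()[:16]
    return masked

changes = [
    {"txn_date": date(2020, 5, 1), "card_number": "4111111111111111", "amount": 42.50},
    {"txn_date": date(2009, 1, 1), "card_number": "4222222222222222", "amount": 10.00},
]
prepared = [mask_pii(row) for row in changes if keep_row(row)]
print(prepared)  # only the recent row survives, with the card number obfuscated
```

The same two ideas, row-level filtering and column-level obfuscation, are what reduce both the data volume moved off the mainframe and the exposure of sensitive fields downstream.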
If you now switch your attention to the bottom part of the diagram, again going from left to right, this describes the CDC data flow. We use the transaction logs to read in the changes at the metadata level, then apply the same transformations as before and batch-optimize them into a streaming data pipeline. The key thing here is that all the tasks work seamlessly in the background once they're set up, and we automatically sync the data for you, so you don't have to do this manually. So the key takeaways, in terms of Replicate and how it works and can help you: the configuration of tasks is all handled by a very simple drag-and-drop user interface, but there's a lot of complex work in the background that takes the heavy lifting out of those time-consuming scripting tasks and processes you would normally have to do in the traditional, old world. That frees up a lot of your time, particularly as a data architect or data engineer, so you can focus on higher business value projects. Qlik Replicate also automatically manages the full load and change data capture for you. And from a designer perspective, while the endpoints may look the same as you're configuring them, like I said, a lot of optimization work has gone into all of the endpoints that we support, both sources and targets, to ensure that the data is unloaded efficiently at the source and optimized and transformed correctly for whatever your chosen target is. So on to the customer example, someone who's really taken advantage of that, and pretty much of use cases two and three that Phil described: Vanguard. Now Vanguard are an investment company; they manage well over $5 trillion of assets with more than 30 million investors. Part of their challenge was that they wanted to leverage the AWS platform to build out new applications and a new cloud platform built on AWS, taking advantage of microservices architectures, and to deliver the analytical solutions their businesses really need, with much better, more efficient, real-time access to the data coming from their large mainframe systems and data sources; typically that was DB2 running on z/OS. The solution they have set up replicates their mainframe data in near real time directly into the AWS cloud platform. They then make it available not only to the application developers, so they can build out these microservices, but also deliver it out to the rest of their analytical users. It really helped them to offload the queries from the source mainframe and reduce the costs. Replicate was selected by Vanguard because of its change data capture technology and the ability to fan out from one source to many targets, which is quite a popular use case. So the result is that, by utilizing the CDC streaming part of the data integration platform, it helps to manage Vanguard's really diverse workloads, as Phil described, and the huge data volumes they're dealing with. Typically, on average, we're helping to move over 20 million rows of data an hour, and at peak times that's in excess of 60 million rows of data an hour.
And for all of this, by delivering a much more efficient pipeline, Vanguard's adoption of the cloud data platform they built using these microservices has increased 200% year on year and is growing, and by also taking advantage of the AWS platform they've managed to reduce their build and compute costs quite significantly, by about 30%. Now, unfortunately we don't have time to unpack this in more detail, but there are some really great videos available online from when Vanguard presented at the AWS re:Invent keynotes last year, and we'll send you the links to these videos after this webinar. There's a great eight-minute overview from Jeff Dows, who's the IT executive at Vanguard, and you can see him there with the punchline slide on the benefits they've achieved, and there's a brilliant 20-minute technical deep dive session from Darwin Stockton, who is the platform owner and cloud data as a service owner at Vanguard. So it's a great example, and I encourage you to take some time out to watch those videos; we'll get you the links. So Vanguard is one of many customers taking advantage of real-time replication from their mainframe data sources, and the Qlik data integration platform is really here to help you make the best of that. It allows you to safely extract the mainframe data for external analytics, flexibly integrating with any major analytics platform or BI tool of choice. We do that through efficient log-based CDC, without impacting production performance at the sources, and it ensures that your analytics targets always stay current with the very latest data, schema and metadata updates in real time. This all helps to improve efficiency and reduce your costs through automated creation and updates of those analytics-ready data sets. Real-time CDC really helps to eliminate the need for constant full loads and helps to reduce those costly MIPS consumption charges, and it really allows you to execute your data and analytics projects and initiatives on hybrid cloud more rapidly, for faster ROI. This is all underpinned by universal integration with all your key mainframe sources and other heterogeneous endpoints, to really allow you to replicate your mainframe data out into the AWS cloud. Trust your mainframe with Qlik and AWS, exactly like Vanguard and many other customers do. So that just leaves me to wrap it up there. If you'd like to learn more, please visit qlik.com, where you can read more about the data integration platform itself. We're proud to be an Advanced Technology Partner within the AWS Partner Network, and we have a dedicated page on this; I've got the link there for you so you can read more about the partnership, and you can even sign up for a free trial and take Qlik Replicate for a test drive yourself, to see how easy it is to manage your data sources and deliver those real-time data streaming pipelines. So I'll leave it there. There are some useful links here on the screen to the website, but also the ability to contact Qlik if you need to, and there's also a dedicated mainframe address for Amazon, mainframe@amazon.com. Please get in touch, we'd love to talk to you, see how we can help you become even better than you already are, and hear about your use cases. So I'll leave it there and open it up to any questions, if there are any.
Adam and Phil, thank you so much for the fantastic presentation. And just to answer the most commonly asked questions, a reminder: I will send a follow-up email to all registrants by end of day Thursday with links to the slides, the recording and anything else requested throughout, and I'll make sure the contact details go out in that email as well. And if you have questions for Phil or Adam, please submit them in the bottom right-hand corner of your screen in the Q&A section. So diving in here: how does data quality work with the data going from the heterogeneous sources to the platform? Okay, so yes, from a Qlik perspective there are a couple of things we can do. You've got the transformation process, in terms of data type conversions, that you can use as one piece of data quality. Depending on what's meant by data quality, we can do things like working with the mainframe copybook: we can take a numeric field that represents a date from, say, an IMS data source, and allow you to transform that to a true date data type field through Replicate. So we can allow you to more easily manage your data sources once the data is copied and replicated out in the cloud, and become more efficient that way. Another part where we can help with data quality is the catalog functionality I mentioned. This has some form of data prep in terms of the onboarding process, it's integrated with the data integration platform, and it has some capacity to improve or master data quality as well. So particularly around data governance, who's got access to the data, what type of data needs to be seen by the right people, those kinds of things can also be covered by the catalog as part of that onboarding process, once you move the data into AWS and you want to make it more available to the organization, as well as improving the quality of the data, fixing errors, cleaning it up and normalizing it a little, and then pushing it out to the rest of the organization so they can find it and use it in an analytics, business-ready form. Phil, anything you want to add to that from an AWS perspective? What I'll add is that it also depends on the use cases we're dealing with. If we're talking about an offload use case, for example, then we would pay attention to having strict functional equivalence between source and target, and we would actually check the data quality going through the functions themselves by doing consistent functional equivalence testing across the two platforms. Now, if we're dealing with new innovations, new use cases, et cetera, then I would say the data format, the data organization, et cetera, can be quite different from the mainframe side itself, so that's where it becomes more tricky to check the source and target and do equivalence testing. Now, I would think that the Qlik Replicate product itself can actually do some checksums on the data, to make sure that it's in a proper, consistent format while it's traveling across the network flows, and that the data is still the proper data when it arrives in the target data store. Yes, that's definitely handled through the configuration side of things. So you decide; and like that example I was trying to articulate around the date field, you transform that into a proper date field once it lands in your target, for sure.
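For the date example Adam describes, a numeric copybook field can be turned into a true date type as part of the replication. The sketch below assumes an eight-digit YYYYMMDD layout, which is only one of several mainframe date encodings; Replicate's built-in data type conversions handle this for you, so this is just to show the idea.

```python
from datetime import date

def numeric_to_date(value: int) -> date:
    """Convert a copybook-style numeric date (assumed YYYYMMDD layout,
    e.g. 20200415) into a true date value for the relational or cloud target."""
    year, remainder = divmod(value, 10000)
    month, day = divmod(remainder, 100)
    return date(year, month, day)

print(numeric_to_date(20200415))  # 2020-04-15
```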
Perfect. And for best practices, does one need to install Qlik in an in-house environment, or should it only be in AWS, when connecting to the mainframe and transferring data to AWS? Yeah, so for the first part, connecting to the mainframe: the way Qlik Replicate works, for the most part, we talk about being agentless with minimal impact on the mainframe. There is some configuration on the mainframe; we try to keep that as lightweight as possible, and we have a team of experts to help you do that. In general, if you can install it as close to the source database as possible, then we can help lift the data from that on-prem, in-house environment and push it out to any target of choice. So like I said, it could go from on-prem to on-prem, and it can go through several hops before it goes out into the cloud. Yeah, what I would add to this is that there is certainly value in having it running on the AWS side, and there are services that are built and readily available so that we can handle failover nicely across Availability Zones. For example, we could use EFS shared file systems, or we could use Amazon FSx for Lustre, so that failover can happen seamlessly: you have an active-passive setup and the passive node becomes active seamlessly, keeping the data replication flowing between the source and the target. So indeed, it can be done either on-prem or on AWS. Actually, the final decision will probably be geared more towards where the sources and targets are. But if there is an AWS target, there is certainly some value in running it on EC2 across Availability Zones, because it makes it very manageable. Definitely. And just to build on that as well, Phil, it really does depend on the volume of data that you've got to move and replicate. We do have customers running Replicate purely in a cloud environment and then extracting from on-prem sources. But I guess it is use-case-dependent and volume-dependent as well: what works best for that environment and that customer. And our attendees are being a little quiet today. If you've got questions, feel free to submit them in the bottom right-hand corner; I'll give you a couple of minutes to add something in there. Phil and Adam, is there anything that you haven't addressed in terms of what the most common issue is, and where you see the biggest relief for your customers? I think from a Qlik side, it's the ability to work with that wide breadth of data sources, those heterogeneous sources. It's typically not just the mainframe data where they have challenges in getting that data delivery piece in, so we can help them with that wider piece, and with the ability to do it with minimal impact and real flexibility in terms of the kinds of tasks and configurations you want to set up. So you can deliver those one-to-many setups and vice versa. It's that flexibility for both on-prem and cloud environments, the real hybrid use cases we started off with. Yeah, to add to that: from a customer perspective, we know it's sometimes very difficult for customers to get access to the mainframe data sets. Especially if it's DB2 on z, there are ways of doing it, but it's not as straightforward; if it's VSAM, it's actually even more difficult.
And so by virtue of having Attunity deployed, Attunity Qlik Replicate, it actually goes much faster, because you install the product, you start having data flowing and streaming, and that can happen in a matter of days, or possibly weeks at maximum. And then, as soon as the data is flowing over to a native data store, you have broad access to the data. That's where the innovation comes in and kicks in, because then all sorts of different use cases are readily available for customers. And it's not just one line of business that may benefit from it; you can actually open the data up to lots of different lines of business. And if I go back to the Vanguard example, one way they've been able to leverage the data is by pushing it across regions, so that the data has close proximity to their users, so they can improve performance and even offload some of the read load from the mainframe and from the infrastructure overall. So as soon as the data becomes available, it's really unleashing the data from the mainframe to AWS, and then all sorts of new use cases follow: the users will come up with use cases that are more specific to their business. Definitely, yeah. It really does spur innovation across the business. We have a question here requesting some documentation and for Qlik to do a bigger deep dive, and we'll definitely work to get that to you; we'll have more information there. And Adam, we get a lot of questions; I see that you specialize in GDPR. Can you talk a little bit about that? I know a lot of companies are still struggling with implementation and compliance across the world. How are Qlik and AWS helping with the governance of compliance? Yeah, I mean, compliance and governance is quite a strong thing. From the Replicate perspective, it's generally installed within a hardened environment, but we do have features across the board, across the platform, that help with governance. So accessibility is one thing: the next stage is the Catalog product I mentioned. We talk about democratization of data, allowing greater and wider access across the organization, and we need to do that in secure and governed ways, to make sure that only the right people get access to the right data. So we can integrate with an IdP and access groups that are already set up, or you can create them from scratch if you have to, and that can help as one part of the governance. Inside the Catalog itself, we have some useful features, particularly around data stewards. So it's not only about ensuring the quality of the data as you're bringing it on board; you can then choose to obfuscate fields, as I mentioned. We've got an element of that in the lightweight transformations in Replicate, but you can extend that further and really start to target and identify fields that contain your most sensitive information, to make sure it's handled correctly and only the right people can get access to it. So it speeds up ease of use, knowing that you're using your data in a governed way, and people can use it safely without exposing any sensitive data, whether that's a simple name and address, credit card details, social security numbers, or more sensitive records than that. So you can manage it; there are controls in there to manage it at a governance level.
And even if you push that all the way through to the data analytics side, which I touched on loosely, we also have a data analytics platform, and that has some really strong governance controls in there, particularly around when you start bringing in self-service. That can be another nightmare: it's valuable to the business, but you have to do it with the right controls, and without stifling usability either. The data analytics platform really gives that balance between delivering the security and flexibility that IT needs and the agility that the business needs. So we can carry that access control pretty much down to row level and control who gets access to which analytics dashboards, down to row level in some cases as well. So that level of governance can be pushed all the way through. The other challenge that organizations have is that GDPR is one key thing, but there are different rules and regulations across the world. Using the kinds of governance controls that you have in place can help you build out your governance strategies and keep your organization safe and secure as well. So we don't push our products as GDPR compliant, but we can help you in your compliance strategies, across quite a few different use cases. The analytics side is quite interesting, because then, like we were saying, once you get the data out and free it up, in terms of making it more accessible and doing more things with it, you can start to analyze the data and see how old it is, who's got access to it, who's doing what with it, those kinds of things, and make sure that you're compliant so you don't keep data for too long. That's one thing: it's not just about data breaches, it's about how you're using the data and how long you're keeping it for. It's an interesting use case that I've come across recently, and yeah, the analytics side can really help you with that as well. Sorry, that was a very long answer. No, it's very good; it's a very important question and answer. Phil, anything you want to add to that? Adam, this is a great response. Thank you. The only thing I would say is that, as far as compliance is concerned, on the AWS side we have lots of guidance available. We have services that are compliance-specific, and we have a list of services that are compliant for each of the regulations. So we can definitely assist with that. If there are specific questions for a specific service or specific use cases, we'll be glad to help. Thank you. So, Adam, this question is for you: how differently does Attunity work after becoming part of Qlik? Were there enhancements made in the last six to eight months? Yeah, yeah, sure. So obviously, Attunity was acquired by Qlik a couple of years ago now, almost two years ago, and it's really brought about that full data integration platform. We've integrated with other products from previous acquisitions; Podium delivered the Catalog. And it's about bringing those products closer together, making it more cohesive under one roof, so there's that development going on as well, and it's also bridged together the full end-to-end platform now.
So really, we're the only vendor that can give that full end-to-end solution, from taking your raw data, turning it into analytics-ready data, and then analyzing it and finding those actionable insights quicker than you could before. So it's brought all that together. We've kept one differentiator: we've always been data agnostic in terms of the wide breadth of data sources we support, but the data integration side of things is also BI tool agnostic, and we've kept that as well, because that's important; we know there's more than one analytics tool. So we are a data company now, not just a data analytics company as we might have been seen a couple of years ago. But in terms of the Attunity side, it's really about strengthening that data integration story and developing it further, bringing in more endpoints, and also joining up that end-to-end story. And the key thing is we've kept what's really the core of any product, the R&D team behind it. We've kept that entire team, and a lot of highly skilled operational, sales and marketing teams as well, and that's all been integrated into Qlik. So one company, one team, one family. And we believe we can add value to our customers at whatever stage in the data journey they want us to add value. Perfect, I love it. Well, that does bring us very close to the top of the hour here. I just want to say thank you both, Adam and Phil, for this great presentation again, and thanks to Qlik for sponsoring today's webinar. Again, I will send a follow-up email by end of day Thursday to all registrants, with links to the slides and the recording of this session, along with the links shown here, and we'll get that email to you with the links to dive deeper into Qlik. So again, hope you all have a great day. Thank you very much. Thanks to all our attendees for attending; I hope you enjoyed it, and stay safe out there. Adam and Phil, thanks so much. Thank you. Awesome, thank you. And thank you, Phil. Thank you, Shannon. Cheers. Thanks, everyone.