From around the globe, it's theCUBE, with digital coverage of enterprise data automation, an event series brought to you by Io-Tahoe. Welcome everybody to Enterprise Data Automation, a co-created digital program on theCUBE with support from Io-Tahoe. My name is Dave Vellante, and today we're using the hashtag DataAutomated. You know, organizations really struggle to get more value out of their data. The time to data-driven insights that drive cost savings or new revenue opportunities is simply too long. So today we're going to talk about how organizations can streamline their data operations through automation, machine intelligence and really simplifying data migrations to the cloud. We'll be talking to technologists, visionaries, hands-on practitioners and experts who are not just talking about streamlining their data pipelines, they're actually doing it. So keep it right there. We're going to be back shortly with Ajay Vohora, the CEO of Io-Tahoe, to kick off the program. You're watching theCUBE, the leader in digital global coverage. We'll be right back after this short break. Innovation, impact, influence. Welcome to theCUBE. Disruptors, developers and practitioners learn from the voices of leaders who share their personal insights from the hottest digital events around the globe. Enjoy the best this community has to offer on theCUBE, your global leader in high-tech digital coverage. From around the globe, it's theCUBE, with digital coverage of enterprise data automation, an event series brought to you by Io-Tahoe. Okay, we're back. Welcome back to Data Automated. Ajay Vohora is CEO of Io-Tahoe. Ajay, good to see you. How are things in London? Things are doing well. Things are doing well. The customers that I speak to day in, day out, that we partner with, they're busy adapting their businesses to serve their customers. It's very much a game of ensuring that we can serve our customers to help their customers.
And the adaptation that's happening here is trying to be more agile, trying to be more flexible. And there's a lot of pressure on data, a lot of demand on data, to deliver more value to the business to serve that customer. As I said, we've been talking about DataOps a lot, the idea being DevOps applied to the data pipeline. But talk about enterprise data automation. What is it to you, and how is it different from DataOps? DevOps has been great for breaking down those silos between different roles and functions and bringing people together to collaborate. And we definitely see those tools, those methodologies, those processes, that kind of thinking, lending itself to data with DataOps, which is exciting. What we look to do is build on top of that with data automation. It's the nuts and bolts of the algorithms, the models behind machine learning, the functions. That's where we invest our R&D, bringing that in to build on top of the methods and ways of thinking that break down those silos, and injecting that automation into the business processes that are going to drive a business to serve its customer. It's a layer beyond DevOps and DataOps. The way I like to think about it is, it's the automation behind the automation. We've come a long way in the last three or four years. We started out with automating some of those tasks that are simple to codify but have a high impact on an organization, across the data lake, across the data warehouse, those data-related tasks that help classify data. And a lot of our original patents and IP portfolio that we've built up is very much around that. I'd love to get into the tech a little bit in terms of how it works. And I think we have a graphic here that gets into that a little bit. So guys, if you'd bring that up. Sure.
I mean, right there in the middle, at the heart of what we do, is the intellectual property that we've built up over time, that takes from heterogeneous data sources, your Oracle relational database, your mainframe, your data lake, and increasingly APIs and devices that produce data. And it creates the ability to automatically discover that data and classify that data. After it's classified, we then have the ability to form relationships across those different source systems, silos and different lines of business. And once we've automated that, then we can start to do some cool things, such as put some context and meaning around that data. So it's moving now from just being data-driven. And increasingly, well, we have really smart, bright people in our customer organizations who want to do some of those advanced knowledge tasks, data scientists, and quants in some of the banks that we work with. The onus is then on putting everything we've done there with automation, classifying it, the relationships, understanding data quality, the policies that you can apply to that data, and putting it in context. Once you've got the ability to empower a professional who's using data to put that data in context and search across the entire enterprise estate, then they can start to do some exciting things and piece together the tapestry, the fabric, across their different systems. It could be CRM, ERP systems such as SAP, and some of the newer cloud databases that we work with; Snowflake is a great one. If I look back maybe five years ago, we had a prevalence of data lake technologies at the cutting edge, and those have started to converge and move to some of the cloud platforms that we work with, such as Google and AWS. And I think, very much as you've said, those manual attempts to try and grasp what is such a complex challenge at scale quickly run out of steam, because once you've got your fingers on the details of what's in your data estate, it's changed.
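To make the discover-and-classify idea concrete: at its simplest, automated classification is pattern matching over sampled column values. This is a minimal illustrative sketch, not Io-Tahoe's actual implementation; the class names, regexes and threshold are all assumptions for the example.

```python
import re

# Illustrative regex patterns for a few common data classes. A real
# platform combines patterns with ML models and context; these names
# and rules are assumptions for the sketch, not a vendor API.
PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "phone": re.compile(r"^\+?\d[\d\s-]{7,14}\d$"),
}

def classify_column(values, threshold=0.8):
    """Tag a column with the first class matching most of its non-empty values."""
    sample = [v for v in values if v]
    for label, pattern in PATTERNS.items():
        hits = sum(1 for v in sample if pattern.match(v))
        if sample and hits / len(sample) >= threshold:
            return label
    return "unclassified"

column = ["alice@example.com", "bob@bank.co.uk", "", "carol@example.org"]
print(classify_column(column))  # email
```

Run at scale over every column in every source, a pass like this is what turns an unknown estate into a tagged one, which is why the classification step is the natural first thing to automate.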
You've onboarded a new customer, you've signed up a new partner, a customer has adopted a new product that you've just launched, and that slew of data keeps coming. So it's keeping pace with that, and the only answer really is some form of automation. You're working with AWS, you're working with Google, you've got Red Hat, IBM as partners. What is attracting those folks to your ecosystem, and give us your thoughts on the importance of ecosystem. That's fundamental. I mean, when I came in to Io-Tahoe as the CEO, one of the trends that I wanted us to be part of was being open, having an open architecture. That allowed for one thing that was close to my heart, which is that as a CEO or a CIO, you've got a budget, a vision, and you've already made investments into your organization. And some of those are pretty long-term bets. They could be going out five, ten years sometimes, with a CRM system, training up your people, getting everybody working together around a common business platform. What I wanted to ensure is that we could openly plug in, using APIs that were available, to a lot of that sunk investment and the cost that has already gone into managing an organization's IT for business users to perform. So part of the reason why we've been able to be successful with some of our partners like Google, AWS, and increasingly a number of technology players such as Red Hat, MongoDB is another one we're doing a lot of good work with, and Snowflake, is that those investments have been made by the organizations that are our customers. And we want to make sure we're adding to that, and that they're leveraging the value that they've already committed to. Maybe you could give us some examples of the ROI and the business impact. Yeah, I mean, the ROI, David, is built upon three things that I've mentioned.
It's a combination of leveraging the existing investment in the existing estate, whether that's on Microsoft Azure or AWS or Google or IBM, and putting that to work, because the customers that we work with have made those choices. On top of that, it's ensuring that we have got the automation working right down to the level of data at a column level or at a file level. So we don't just deal with metadata; it's being very specific, down at the most granular level. So as we run our processes and the automation, classification, tagging, applying policies from across the different compliance and regulatory needs that an organization has to the data, everything that then happens downstream from that is ready to serve a business outcome. Now with Io-Tahoe being able to run those processes within hours of getting started, and build that picture, visualize that picture and bring it to life, you know, the ROI starts off the bat with finding data that should have been deleted, data that there are copies of, and being able to empower the architect, whether we're working on GCP or on a migration to any other cloud such as AWS, or quite often now a multi-cloud landscape. Ajay Vohora, thanks so much for coming on theCUBE and sharing your insights and your experiences. Great to have you. Thank you, David. Look forward to talking again. Now we want to bring in the customer perspective. We have a great conversation with Paula D'Amico, Senior Vice President of Enterprise Data Architecture at Webster Bank, so keep it right there. Io-Tahoe, data automated. Improve efficiency, drive down costs and make your enterprise data work for you. We're on a mission to enable our customers to automate their management of data to realize maximum strategic and operational benefits. We envisage a world where data users consume accurate, up-to-date, unified data distilled from many silos to deliver transformational outcomes. Activate your data and avoid manual processing.
Accelerate data projects by enabling non-IT resources and data experts to consolidate, categorize and master data. Automate your data operations. Power digital transformations by automating a significant portion of data management through human-guided machine learning. Get value from the start. Increase the velocity of business outcomes with complete, accurate data curated automatically for data visualization tools and analytic insights. Improve the security and quality of your data. Data automation improves security by reducing the number of individuals who have access to sensitive data, and it can improve quality. Many companies report double-digit error reduction in data entry and other repetitive tasks. Trust the way data works for you. Data automation by Io-Tahoe learns as it works and can augment business user behavior. It learns from exception handling and scales up or down as needed to prevent system or application overloads or crashes. It also allows for innate knowledge to be socialized rather than individualized. No longer will your company struggle when the employee who knows how a report is done retires or takes another job. The work continues on without the need for detailed information transfer. Continue supporting the digital shift. Perhaps most importantly, data automation allows companies to begin making moves toward a broader, more aspirational transformation, but on a small scale that is easy to implement and manage and delivers quick wins. Digital is the buzzword of the day, but many companies recognize that it is a complex strategy that requires time and investment. Once you get started with data automation, the digital transformation is initiated, and leaders and employees alike become more eager to invest time and effort in a broader digital transformation agenda. Hi everybody, we're back. This is Dave Vellante, and we're covering the whole notion of automating data in the enterprise. And I'm really excited to have Paula D'Amico here.
She's Senior Vice President of Enterprise Data Architecture at Webster Bank. Paula, good to see you. Thanks for coming on. Hi, nice to see you too. How are you? Yeah, so let's start with Webster Bank. You guys are kind of a regional bank, I think New York, New England, I believe headquartered out of Connecticut, but tell us a little bit about the bank. Yep. Webster Bank is regional: Boston, Connecticut and New York, very focused on Westchester and Fairfield County. It's a really highly rated regional bank for this area. They hold quite a few awards for being supportive of the community, and they're really moving forward technology-wise. Currently we have a small group that is working toward moving into a more futuristic, more data-driven data warehouse. That's our first item. And then the other item is to drive new revenue by anticipating what customers do when they go to the bank or when they log into their account, to be able to give them the best offer. And the only way to do that is if you have timely, accurate, complete data on the customer and what's really a great value or something to offer them. At the top level, what are some of the key business drivers that are catalyzing your desire for change? The ability to give a customer what they need at the time when they need it. And what I mean by that is that we have customer interactions in multiple ways, right? And I want the customer to be able to walk into a bank or go online and see the same format, have the same feel, the same look, and also be offered the next best offer for them. Part of it is really the cycle time, the end-to-end cycle time that you're compressing. And then there's, if I understand it, the residual benefits that are pretty substantial from a revenue opportunity. Exactly. It's to drive new customers to new opportunities, it's to enhance risk management, it's to optimize the banking process, and then obviously to create new business.
And the only way we're going to be able to do that is if we have the ability to look at the data right when the customer walks in the door or right when they open up their app. Do you see the potential to increase the data sources, and hence the quality of the data, or is that sort of premature? Oh no, exactly right. So right now we ingest a lot of flat files from our mainframe type of front-end system that we've had for quite a few years. But now we're moving to the cloud, moving off-prem into an S3 bucket where we can process that data and get that data faster, by using real-time tools to move that data into a place where something like Snowflake can utilize it, or we can give it out to our market. The data scientists are out in the lines of business right now, which is great, because I think that's where data science belongs. And that's what we're working towards now: giving them more self-service, giving them the ability to access the data in a more robust way, and it's a single source of truth. So they're not pulling the data down into their own Tableau dashboards and then pushing the data back out. I have data engineers, data architects, database administrators, right? And then traditional data reporting people, because some customers that I have, business customers in the lines of business, want to just subscribe to a report. They don't want to go out and do any data science work. And we still have to provide that. So we still want to provide them some kind of, you know, regimen, where they wake up in the morning and open up their email, and there's the report that they subscribe to, which is great, and it works out really well. And one of the reasons why we purchased Io-Tahoe was to have the ability to give the lines of business the ability to do search within the data, see the data flows and data redundancy and things like that, and help me clean up the data.
And also to give it to the data analysts. Say they just asked me for a certain report; it used to take four weeks: we're going to go and look at the data, and then we'll come back and tell you what we can do. But now with Io-Tahoe, they're able to look at the data, and then in one or two days they'll be able to go back and say, yes, we have the data, this is where it is, this is where we found it, and these are the data flows that we found. Which is what I call the birth of a column: it's where the column was created, where it went to live as a teenager, and then where it went to, you know, die, where we archived it. In researching Io-Tahoe, it seems like one of the strengths of their platform is the ability to visualize data, the data structure, and actually dig into it, but also see it, and that speeds things up and gives everybody additional confidence. And then the other piece is essentially infusing AI or machine intelligence into the data pipeline; that's really how you're attacking automation, right? Exactly. So let's say that I have seven core lines of business that are asking me questions. And one of the questions they'll ask me is, we want to know if this customer is okay to contact, right? And you know, there are different avenues. You can go online and say, do not contact me. You can go to the bank and say, I don't want email, but I'll take texts, and I want, you know, no phone calls. All that information. So seven different lines of business ask me that question in different ways. One says, you know, okay to contact. The other one says, you know, customer one, two, three, all these. And each project before I got there used to be siloed. So one customer request would be a hundred hours for an analyst to do that analytical work, and then another analyst would do another hundred hours on the other project. Well, now I can do that all at once.
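Paula's "birth of a column" is column-level data lineage: a chain of hops from where a column was created to where it was archived. A toy sketch of tracing such a chain, with system and column names invented for illustration:

```python
# A tiny lineage graph: each edge records one hop a column makes between
# systems. The system and column names are invented for illustration.
edges = [
    ("crm.customers.email", "staging.cust.email"),
    ("staging.cust.email", "warehouse.dim_customer.email"),
    ("warehouse.dim_customer.email", "archive.dim_customer_2019.email"),
]

def trace(column, edges):
    """Follow a column from its birth to wherever it ended up."""
    hop = dict(edges)  # assumes one downstream target per column
    path = [column]
    while path[-1] in hop:
        path.append(hop[path[-1]])
    return path

print(" -> ".join(trace("crm.customers.email", edges)))
```

Real lineage tooling captures these edges automatically during discovery and handles branching flows; the point of the sketch is just that once the edges exist, answering "where did this column come from and where did it go" is a cheap graph walk rather than a four-week analysis.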
And I can do those types of searches and say, yes, we already have that documentation, here it is, and this is where you can find where the customer has said, you know, no, I don't want to be contacted by email, or I've subscribed to get emails from you. I'm using Io-Tahoe's data automation right now to bring in the data and to start analyzing the data flows, to make sure that I'm not missing anything and that I'm not bringing over redundant data. The data warehouse that I'm working off of is on-prem; it's an Oracle database and it's 15 years old. So it has extra data in it. It has things that we don't need anymore, and Io-Tahoe is helping me shake out that extra data that does not need to be moved into my S3 bucket. So it's saving me money when I'm moving from on-prem to off-prem. What's your vision for your data-driven organization? I want the bankers to be able to walk around with an iPad in their hand and be able to access data for that customer really fast, and be able to give them the best deal that they can get. I want Webster to be right there on top with being able to add new customers and to serve our existing customers, who have had bank accounts since they were 12 years old there and now are multi, whatever. I want them to be able to have the best experience with our bankers. That's really what I want as a banking customer. I want my bank to know who I am, anticipate my needs and create a great experience for me, and then let me go on with my life. And so, Paula, great story. Love your experience, your background and your knowledge. Can't thank you enough for coming on theCUBE. No, thank you very much, and you guys have a great day. Next, we'll talk with Lester Waters, who's the CTO of Io-Tahoe. Lester takes us through the key considerations of moving to the cloud. The Io-Tahoe platform: automated data discovery. Data discovery is the first step to knowing your data.
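One cheap signal behind the kind of cross-silo relationship discovery described throughout this program is value overlap: if two columns in different systems share most of their distinct values, they probably describe the same entity. A minimal sketch with made-up IDs; real platforms use much richer profiling than a plain Jaccard score:

```python
def overlap(col_a, col_b):
    """Jaccard similarity of two columns' distinct values: a cheap signal
    that they hold the same entity (e.g. a customer ID) in different silos."""
    a, b = set(col_a), set(col_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented sample IDs from two hypothetical silos.
crm_ids = ["C001", "C002", "C003", "C004"]
billing_ids = ["C002", "C003", "C004", "C005"]

score = overlap(crm_ids, billing_ids)
print(f"{score:.2f}")  # 0.60
if score > 0.5:
    print("likely relationship: crm.customer_id <-> billing.customer_id")
```

Scoring every column pair this way is quadratic, so production systems sample values and prune candidates first, but the underlying idea, discovering relationships nobody documented, is the same.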
Auto-discover data across any application on any infrastructure, and identify all unknown data relationships across the entire siloed data landscape. Smart data catalog: know how everything is connected, understand everything in context. Regain ownership of and trust in your data, and maintain a single source of truth across cloud platforms, SaaS applications, reference data and legacy systems. Empower business users to quickly discover and understand the data that matters to them with a smart data catalog, continuously updated, ensuring business teams always have access to the most trusted data available. Automated data mapping and linking: automate the identification of unknown relationships within and across data silos throughout the organization. Build your business glossary automatically, using in-house common business terms, vocabulary and definitions. Discovered relationships appear as connections or dependencies between data entities such as customer, account, address and invoice, and these data entities have many discoverable properties at a granular level. Data signals dashboards: get up-to-date feeds on the health of your data for faster, improved data management. See trends, view full history, compare versions, and get accurate and timely visual insights from across the organization. Automated data flows: automatically capture every data flow to locate all the dependencies across systems. Visualize how they work together collectively, and know who within your organization has access to data. Understand the source and destination for all your business data with comprehensive data lineage, constructed automatically during the data discovery phase, with results continuously loaded into the smart data catalog. Active DQ: automated data quality assessments powered by Active DQ ensure data is fit for consumption and meets the needs of enterprise data users. Keep information about the current data quality state readily available for faster, improved decision making.
Data Policy Governor: automate data governance end to end over the entire data lifecycle, with automation, instant transparency and control. Automate data policy assessments with glossaries, metadata and policies for sensitive data discovery that automatically tag, link and annotate with metadata to provide enterprise-wide search for all lines of business. Self-service knowledge graph: digitize and search your enterprise knowledge. Turn multiple siloed data sources into machine-understandable knowledge. From a single data canvas, search and explore data content across systems including ERP, CRM, billing systems and social media to fuel data pipelines. Focusing on enterprise data automation, we're going to talk about the journey to the cloud. Remember, the hashtag is data automated, and we're here with Lester Waters, who's the CTO of Io-Tahoe. Hey, give us a little background. As CTO, you've got deep expertise in a lot of different areas, but what do we need to know? Well, David, I started my career basically at Microsoft, where I started the information security cryptography group there, the very first one that the company had. And that led to a career in information security. And of course, as you go along with information security, data is the key element to be protected. So I always had my hands in data, and that naturally progressed into a role with Io-Tahoe as their CTO. What's the prescription for that automation journey and simplifying that migration to the cloud? Well, I think the first thing is understanding what you've got. So discover and catalog your data and your applications. If I don't know what I have, I can't move it. I can't improve it. I can't build upon it. And I have to understand the dependencies. So building that data catalog is the very first step. What have I got? Okay, so we've done the audit. We know what we've got. What's next? Where do we go next? So the next thing is remediating that data. Where do I have duplicate data?
Oftentimes in an organization, data will get duplicated. Somebody will take a snapshot of the data, you know, and then end up building a new application which suddenly becomes dependent on that data. So it's not uncommon for an organization to have 20 master instances of a customer. And you can see where that goes: trying to keep all that stuff in sync becomes a nightmare all by itself. So you want to understand where all your redundant data is, so that when you go to the cloud, maybe you have an opportunity to consolidate that data. Then what? You figure out what to get rid of, or actually get rid of it. What's next? Yes, yes, that would be the next step. So figure out what you need and what you don't need. Oftentimes I've found that there are obsolete columns of data in your databases that you just don't need, or you've got tables that have been superseded by other tables in your database. So you've got to understand what's being used and what's not. And then from that, you can decide: I'm going to leave this stuff behind, or I'm going to archive this stuff because I might need it for data retention, or I'm just going to delete it, because you don't need it at all. We're plowing through your steps here. What's next on the journey? The next one, in a nutshell: preserve your data format. Don't boil the ocean here, to use a cliché. You want to do a certain degree of lift and shift, because you've got application dependencies on that data and on the data format, the tables in which they sit, the columns and the way they're named. So to some degree you are going to be doing a lift and shift, but it's an intelligent lift and shift. The data lives in silos. So, you know, how do you deal with that problem? Is that part of the journey? That's a great point, because you're right.
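The duplicate-data step Lester describes can be approximated mechanically: fingerprint each table's contents and compare the fingerprints to surface candidate copies before a migration. A rough sketch using order-insensitive hashing; real tooling would also catch near-duplicates and partial overlaps:

```python
import hashlib

def table_fingerprint(rows):
    """Order-insensitive hash of a table's rows. Equal fingerprints flag
    candidate duplicate copies worth consolidating before a migration."""
    digest = hashlib.sha256()
    for row in sorted(",".join(map(str, r)) for r in rows):
        digest.update(row.encode())
    return digest.hexdigest()

# Two hypothetical copies of the same customer data in different stores.
customers_v1 = [("C001", "Alice"), ("C002", "Bob")]
snapshot_2019 = [("C002", "Bob"), ("C001", "Alice")]  # same data, reordered

print(table_fingerprint(customers_v1) == table_fingerprint(snapshot_2019))  # True
```

Matching fingerprints across 20 "master" instances of a customer table is exactly the kind of evidence that lets you consolidate with confidence rather than by guesswork.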
The data silos happen because, you know, this business unit is chartered with this task, and another business unit has that task, and that's how you get those instantiations of the same data occurring in multiple places. So as part of your cloud migration journey, you really want to plan where there's an opportunity to consolidate your data, because that means there'll be less to manage, less data to secure, and a smaller footprint, which means reduced costs. So maybe you could address data quality. Where does that fit in on the journey? That's a very important point. First of all, you don't want to bring your legacy issues with you, as per the point I made earlier. If you've got data quality issues, this is a good time to find, identify and remediate them, but that can be a laborious task, and you could probably accomplish it, but it'll take a lot of work. So the opportunity to use tools here and automate that process will really help you find those outliers. What's next? I think I've counted six. What's the lucky seven? Lucky seven: involve your business users. Really, when you think about it, your data is in silos. Part of this migration to cloud is an opportunity to break down those silos, the silos that naturally occur as part of the business units. You've got to break the cultural barriers that sometimes exist between business units. So, for example, I always advise that there's an opportunity here to consolidate your sensitive data, your PII, your personally identifiable information, and if three different business units have the same source of truth for that, there's an opportunity to consolidate it into one. Well, great advice, Lester. Thanks so much. I mean, it's clear that CAPEX investments in data centers are generally not a good investment for most companies. Lester, really appreciate it. Lester Waters, CTO of Io-Tahoe.
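On the data-quality step from that conversation: finding outliers automatically can start as simply as flagging values far from a column's mean. An illustrative sketch with invented numbers; production profilers use far more sophisticated rules, but the principle of letting a tool scan every column instead of a person is the same:

```python
import statistics

def outliers(values, z=2.0):
    """Flag values more than z standard deviations from the mean.
    A first-pass automated quality check, not a full profiler."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return []
    return [v for v in values if abs(v - mean) / sd > z]

balances = [100, 105, 98, 102, 99, 101, 103, 9999]  # one bad record
print(outliers(balances))  # [9999]
```

Run across an entire estate, checks like this surface the laborious-to-find records Lester mentions so a human only reviews the exceptions.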
Let's watch this short video and we'll come right back. Use cases. Data migration: accelerate digitization of the business by providing automated data migration workflows that save time in achieving project milestones, eradicate operational risk, and minimize labor-intensive manual processes that demand costly overhead. Data quality: clean up a data swamp and reestablish trust in the data to enable data science and data analytics. Data governance: ensure that business and technology understand critical data elements and have control over the enterprise data landscape. Data analytics enablement: data discovery to enable data scientists and data analytics teams to identify the right data set through self-service, for business demands or analytical reporting that is advanced or complex. Regulatory compliance: government-mandated data privacy requirements, GDPR, CCPA, EPR, HIPAA and FERPA. Data lake management: identify lake contents, clean up and manage ongoing activity. Data mapping and knowledge graph: create EKG models on business enterprise data with automated mapping to a specific ontology, enabling semantic search across all sources in the data estate. DataOps at scale: a foundation to automate your data management processes. Are you interested in test-driving the Io-Tahoe platform? Kickstart the benefits of data automation for your business through the IO Labs program, a flexible, scalable sandbox environment on the cloud of your choice, with setup, service and support provided by Io-Tahoe. Click on the link and connect with a data engineer to learn more and see Io-Tahoe in action. Hi everybody, we're back. We're talking about enterprise data automation. The hashtag is data automated, and we're going to really dig into data migrations. Data migrations are risky, they're time consuming, and they're expensive. Yousef Khan is here; he's the head of partnerships and alliances at Io-Tahoe, coming again from London. Hey, good to see you, Yousef. Thanks very much.
Thank you, Dave, great to be here. So let's set up the problem a little bit, and then I want to get into some of the data. I just said migrations are risky, time consuming, expensive; they're oftentimes a blocker for organizations to really get value out of data. Why is that? I think, I mean, all migrations have to start with knowing the facts about your data, and you can try and do this manually, but when you have an organization that may have been going for decades or longer, they will probably have a pretty large legacy data estate. So they'll have everything from on-premise mainframes, they may have stuff which is partly in the cloud, but they probably have hundreds, if not thousands, of applications and potentially hundreds of different data stores. So I want to dig into this migration, and let's pull up a graphic that talks about what a typical migration project looks like. So what you see here is very detailed, I know it's a bit of an eye test, but let me call your attention to some of the key aspects of this, and then, Yousef, I want you to chime in. So at the top here you see that area graph; that's operational risk for a typical migration project, and you can see the timeline and the milestones. That blue bar is the time to test, so you can see the second step, data analysis, is taking 24 weeks, so very time consuming. And we won't dig into the stuff in the middle, the fine print, but there's some real good detail there. But go down to the bottom; that's labor intensity at the bottom, and you can see high is that sort of brown, and a number of phases are high: data analysis, data staging, data prep, the trial, the implementation, post-implementation fixes, the transition to BAU, which I think is business as usual.
The key thing there is that when you don't understand your data upfront, it's very difficult to scope and to set up a project, because you go to business stakeholders and decision makers and you say, okay, we want to migrate these data stores, we want to put them in the cloud most often, but actually you probably don't know how much data is there, you don't necessarily know how many applications it relates to, you don't have the relationships between the data, you don't know the flow of the data, so the direction in which the data is going between different data stores and tables. So you start from a position where you have pretty high risk, and if you can't alleviate that risk, you probably stack your project team with lots and lots of people to do the next phase, which is analysis. And so you've set up a project which has got a pretty high cost. The bigger the project, the more people, the heavier the governance, obviously, and then they're in the phase where they're trying to do lots and lots of manual analysis. Manual analysis, as we all know, the idea of trying to relate data that's in different data stores, relating individual tables and columns, is very, very time consuming; expensive if you're hiring in resource from consultants or systems integrators externally; and you might need to buy or to use third-party tools. As I said earlier, the people who understand some of those systems may have left a while ago. So you're in a high-risk, high-cost situation from the off, and the same thing develops through the project. What you find with Io-Tahoe is that we're able to automate a lot of this process from the very beginning, because we can do the initial data discovery run, for example, automatically. So you very quickly have an automated view of the data, a data map and the data flow that's been generated automatically, with much less time and effort and much less cost. And now let's bring up the same chart, but with sort of an automation injection in here.
And now you see the same project, accelerated by Io-Tahoe, okay, great, and we're going to talk about this, but look what happens to the operational risk: a dramatic reduction in that graph. And then look at the bars, those blue bars: data analysis went from 24 weeks down to four weeks. And then look at the labor intensity. All of these were high: data analysis, data staging, data prep, trial, post-implementation fixes and transition to BAU. All of those went from high labor intensity, so we've now attacked that, and gone to low labor intensity. Explain how that magic happened. Let's take the example of a data catalog. Every large enterprise wants to have some kind of repository where they put all their understanding about their data, an enterprise data catalog, if you like. Imagine trying to do that manually. You'd need to go into every individual data store. You'd need a DBA and a business analyst for each data store. They'd need to do an extract of the data, and they'd need to do the tables individually. They'd need to cross-reference that with other data stores and schemas and tables. You'd probably do that with the mother of all Excel spreadsheets, and it would be a very, very difficult exercise to do. In fact, one of our reflections, as we automate lots of these things, is that it accelerates the ability to automate, but in some cases it also makes it possible for enterprise customers with legacy systems. Take banks, for example: they quite often end up staying on mainframe systems that they've had in place for decades, and not migrating away from them, because they're not able to actually do the work of understanding the data, deduplicating the data, deleting data that isn't relevant, and then confidently going forward to migrate. So they stay where they are, with all the attendant problems of systems that are out of support.
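The catalog-building that Yousef describes as painful to do by hand is exactly what gets automated: instead of a DBA and an analyst per data store, a crawler walks each store's own metadata. A toy sketch using SQLite as a stand-in for any relational source; the table names are invented, and a real crawler would also capture row counts, keys and classifications:

```python
import sqlite3

# Sketch of automated catalog extraction: walk a database's own metadata
# instead of hand-building "the mother of all Excel spreadsheets".
# SQLite stands in here for any relational source; tables are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT, dob TEXT);
    CREATE TABLE accounts  (id INTEGER PRIMARY KEY, customer_id INTEGER, balance REAL);
""")

catalog = {}
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
for (table,) in tables:
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk).
    cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
    catalog[table] = [(name, ctype) for _, name, ctype, *_ in cols]

print(catalog["customers"])
```

Pointed at hundreds of stores, the same loop yields the enterprise-wide catalog in hours rather than the months of manual extracts the chart above contrasts.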
You'll know the biggest frustration for lots of them, and the thing that they spend far too much time doing, is trying to work out what the right data is and cleaning data, which really you don't want a highly paid data scientist doing with their time. But if you sort out your data estate in the first place, get rid of duplication, perhaps migrate to a cloud store where things are more readily accessible and it's easy to build connections and to use native machine learning tools, you're well on the way up the data maturity curve, and you can start to use some of those more advanced applications. Massive opportunities, not only for technology companies, but for those organizations that can apply technology for business advantage. Yousef Khan, thanks so much for coming on theCUBE. Brilliant, thank you very much, appreciate it.