From around the globe, it's theCUBE, presenting Adaptive Data Governance. Brought to you by Io-Tahoe. And we're back with the data automation series. In this episode, we're going to learn more about what Io-Tahoe is doing in the field of adaptive data governance, how it can help achieve business outcomes and mitigate data security risks. I'm Lisa Martin, and I'm joined by Ajay Vohora, the CEO of Io-Tahoe, and Lester Waters, the CTO of Io-Tahoe. Gentlemen, it's great to have you on the program. Thank you, Lisa. It's good to be back. Great to see you, Lisa. Likewise, very socially distant, of course, as we are. Lester, we're going to start with you. What's going on at Io-Tahoe? What's new? Well, I've been with Io-Tahoe for a little over a year, and one thing I've learned is that every customer's needs are just a bit different. So we've been working on our next major release of the Io-Tahoe product to really address these customer concerns, because we want to be flexible enough to come in and not just profile the data, not just understand data quality and lineage, but also address the unique needs of each and every customer that we have. That required a platform rewrite of our product, so that we could extend the product without building a new version of it. We wanted to have pluggable modules. We also focused a lot on performance. It's very important, with the volume of data that we deal with, that we're able to pass through that data in a single pass and do the analytics that are needed, whether it's lineage, data quality, or just identifying the underlying data. And we're incorporating all that we've learned. We're tuning up our machine learning. We're analyzing on more dimensions than we've ever done before. We're able to do data quality without writing an initial regex, for example, just out of the box. So all of these things are coming together to form the next version of our product, and we're really excited by it.
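The single-pass, pluggable design Lester describes, one scan of the data shared by several analytics modules, might be sketched like this. This is a hypothetical illustration only; the class names and structure are invented for the example, not Io-Tahoe's actual architecture.

```python
# Hypothetical sketch of a single-pass, pluggable analytics engine:
# every "worker" sees each row exactly once, so quality checks,
# classification, and other analytics can share one scan of the data.

class Worker:
    def observe(self, row): ...
    def report(self): ...

class NullCounter(Worker):
    """Tracks the fraction of missing values per column."""
    def __init__(self):
        self.nulls, self.rows = {}, 0

    def observe(self, row):
        self.rows += 1
        for col, val in row.items():
            if val in (None, ""):
                self.nulls[col] = self.nulls.get(col, 0) + 1

    def report(self):
        return {col: n / self.rows for col, n in self.nulls.items()}

def single_pass(rows, workers):
    for row in rows:          # one scan of the data...
        for w in workers:     # ...shared by every pluggable worker
            w.observe(row)
    return {type(w).__name__: w.report() for w in workers}

rows = [{"ssn": "123-45-6789", "name": "Ann"},
        {"ssn": None, "name": "Bob"}]
stats = single_pass(rows, [NullCounter()])
# stats["NullCounter"] reports that half the "ssn" values are missing
```

New analytics (lineage, pattern detection) would be added as further `Worker` subclasses, without rescanning the data.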
Sounds exciting. AJ, from the CEO's level, what's going on? Well, just building on what Lester mentioned, we're growing pretty quickly with our partners, and today, here with Oracle, we're excited to explain how that's shaping up. We're collaborating with Oracle already in government, in insurance, and in banking. And we're excited because we get to have an impact. It's really satisfying to see how we're able to help businesses transform and redefine what's possible with their data, and having Oracle there as a partner to lean in with is definitely helping. Excellent, we're going to dig into that a little bit later. Lester, let's go back over to you. Explain adaptive data governance; help us understand that. Really, adaptive data governance is about achieving business outcomes through automation. It's also about establishing a data-driven culture and pushing what's traditionally managed in IT out to the business. And to do that, you've got to enable an environment where people can actually access and look at the information about the data, not necessarily the underlying data itself, because we've got privacy concerns as well. But they need to understand what kind of data they have, what shape it's in, and what's dependent on it upstream and downstream, so that they can make educated decisions on what they need to do to achieve those business outcomes. A lot of frameworks these days are hardwired: you can set up a set of business rules, and that set of business rules works for a very specific database and a specific schema. But imagine a world where you could just say, the start date of a loan must always be before the end date of a loan, and have that generic rule applied regardless of the underlying database, even when a new database comes online. That's what adaptive data governance is about. I like to think of it as the intersection of three circles.
Really, it's the technical metadata coming together with policies and rules, and coming together with the business ontologies that are unique to that particular business. Bringing all of this together allows you to enable rapid change in your environment. So adaptive data governance is a mouthful, but that's what it comes down to. So AJ, help me understand this. Is this what enterprise companies are doing now, or are they not quite there yet? Well, Lisa, I think every organization is going at its own pace, but markets are changing, and the speed at which some of the changes in the economy are happening is compelling more businesses to look at being more digital in how they serve their own customers. So what we're seeing is a number of trends, from heads of data, chief data officers, and CIOs, stepping back from a one-size-fits-all approach, because they've tried that before and it just hasn't worked. They've spent millions of dollars on IT programs trying to drive value from their data, and they've ended up with large teams doing manual processing around data to try and hardwire these policies to fit the context of each line of business, and that hasn't worked. So the trends that we're seeing emerge really relate to: how do I, as a chief data officer, as a CIO, inject more automation into a lot of these common tasks? And we've been able to see that impact. I think the news here is, if you're trying to create a knowledge graph, a data catalog, or a business glossary, and you're trying to do that manually, well, stop. You don't have to do that manually anymore. The best example I can give is this: Lester and I, we like Chinese food and Japanese food. If you were sitting there with your chopsticks, you wouldn't eat a bowl of rice with the chopsticks one grain at a time. What you'd want to do is find a more productive way to enjoy that meal before it gets cold.
And that's similar to how we're able to help organizations digest their data: get through it faster and enjoy the benefits of putting that data to work. And if it was me eating that food with you guys, I would not be using chopsticks; I would be using a fork and probably a spoon. So Lester, how then does Io-Tahoe go about doing this and enabling customers to achieve it? Let me walk you through a little story here. If you take a look at the challenges that most customers have, they're very similar, but every customer is on a different data journey. It all starts with: what data do I have? What shape is that data in? How is it structured? What's dependent on it upstream and downstream? What insights can I derive from that data? And how can I answer all of those questions automatically? If you look at the challenges for these data professionals, they're either on a journey to the cloud, maybe they're doing a migration to Oracle, maybe they're making some data governance changes, and it's about enabling all of this. So I'm going to take you through a story here, and I want to introduce Amanda. Amanda is like anyone in any large organization. She's looking around and she just sees stacks of data: different databases, the ones she knows about, the ones she doesn't know about but should know about, various different kinds of databases. And Amanda's tasked with understanding all of this so that she can embark on her data journey program. So Amanda goes through and says, great, I've got some handy tools. I can start looking at these databases and getting an idea of what we've got. Well, as she digs into the databases, she starts to see that not everything is as clear as she might have hoped. Column names are ambiguous, like attribute one and attribute two, or maybe date one and date two.
So Amanda's starting to struggle. Even though she's got tools to visualize and look at these databases, she knows she's got a long road ahead, and with 2,000 databases in her large enterprise, yes, it's going to be a long journey. But Amanda's smart, so she pulls out her trusty spreadsheet to track all of her findings. For what she doesn't know about, she raises a ticket or maybe tries to track down the owner to find out what the data means. And she's tracking all this information. Clearly, this doesn't scale that well for Amanda. So maybe the organization gets 10 Amandas to sort of divide and conquer the work, but even that doesn't work that well, because there are still ambiguities in the data. With Io-Tahoe, what we do is actually profile the underlying data. By looking at the underlying data, we can quickly see that attribute one looks very much like a US Social Security number and attribute two looks like an ICD-10 medical code. We do this by using ontologies and dictionaries and algorithms to help identify the underlying data and then tag it. Key to this automation is being able to normalize things across different databases, so that where there are differences in column names, I know that in fact they contain the same data. And by going through this exercise with Io-Tahoe, not only can we identify the data, but we can also gain insights about it. For example, we can see that 97% of the time, that column named attribute one has something that looks like a Social Security number, but 3% of the time it doesn't quite look right. Maybe there's a dash missing, maybe there's a digit dropped, or maybe there are even characters embedded in it. That may be indicative of data quality issues, so we try to find those kinds of things. Going a step further, we also try to identify data quality relationships. So, for example, we have two columns, date one and date two.
Through observation, we can see that date one, 99% of the time, is less than date two. The 1% of the time it's not is probably indicative of a data quality issue. But going a step further, we can also build a business rule that says date one must be less than date two, and then when the issue pops up again, we can quickly identify and remediate the problem. So these are the kinds of things that we can do with Io-Tahoe. Going even a step further, you can take your favorite data science solution, productionize it, and incorporate it into our next version as what we call a worker process, to do your own bespoke analytics. Bespoke analytics; excellent, Lester, thank you. So AJ, talk us through some examples of where you're putting this to use, and also, what is some of the feedback from customers? Well, Lisa, I think what will help bring this to life a little bit is to talk through a case study we pulled together; I know it's available for download. A well-known telecommunications media company had a lot of the issues that Lester just spoke about: lots of teams of Amandas, super-bright data practitioners, really looking to get more productivity out of their day and deliver a good result for their own customers, cell phone subscribers and broadband users. Some of the examples we can see here are how we went about auto-generating a lot of that understanding of the data within hours. So Amanda had her data catalog populated automatically and a business glossary built up, and could then really start to see: okay, where do I want to apply some policies to the data to set in place some controls? Where do I want to adapt how different lines of business, maybe tax versus customer operations, have different access or permissions to that data? And what we've been able to do there is build up that picture of how data moves across the entire organization, across the estate, and monitor that over time for improvement.
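The profiling and rule-suggestion steps Lester walked through, recognizing that a vaguely named column holds Social-Security-number-shaped values and proposing an ordering rule between two date columns, could be sketched roughly as follows. This is an illustrative sketch only; the regex, thresholds, function names and sample data are invented, not Io-Tahoe's actual logic.

```python
import re
from datetime import date

SSN = re.compile(r"^\d{3}-\d{2}-\d{4}$")  # crude US SSN shape, for illustration

def classify(values, pattern, threshold=0.9):
    """Tag a column when most of its values match a known pattern;
    the non-matching rows become data-quality suspects to flag."""
    misses = [v for v in values if not pattern.match(v)]
    rate = 1 - len(misses) / len(values)
    return rate >= threshold, rate, misses

def suggest_order_rule(col_a, col_b, tolerance=0.05):
    """Propose the business rule 'a < b' when it holds for nearly
    all observed rows; the violations are flagged for remediation."""
    violations = [(a, b) for a, b in zip(col_a, col_b) if not a < b]
    holds = len(violations) / len(col_a) <= tolerance
    return holds, violations

# A column vaguely named attribute_1: mostly SSN-shaped, one suspect value.
attribute_1 = ["123-45-6789", "987-65-4321", "98765A4321"]
tagged, rate, suspects = classify(attribute_1, SSN, threshold=0.6)

# Two vaguely named date columns: date_1 < date_2 except for one row.
date_1 = [date(2020, 1, 1), date(2020, 2, 1), date(2020, 3, 1)]
date_2 = [date(2021, 1, 1), date(2021, 2, 1), date(2019, 1, 1)]
holds, bad_rows = suggest_order_rule(date_1, date_2, tolerance=0.5)
```

Once a column is tagged, the same rule can be reused wherever the tag appears, rather than being rewritten per schema.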
So we've taken it from being reactive, let's do something to fix a problem, to now being more proactive. We can see what's happening with our data: who's using it, who's accessing it, how it's being used, how it's being combined. And from there, taking a proactive approach is a really smart use of the talents in that telco organization and the folks that work there with data. Okay, AJ, let's dig into that a little bit deeper. One of the things I was thinking about, when you were talking through some of those outcomes that you're helping customers achieve, is ROI. How do customers measure ROI? What are they seeing with Io-Tahoe's solution? Yeah, right now, the big-ticket item is time to value. In data, a lot of the upfront investment costs are quite expensive; they have been to date, with a lot of the larger vendors and technologies. So what a CIO and an economic buyer really need to be certain of is: how quickly can I get that ROI? And I think we've got something we can show; let's just pull up a before and after. It really comes down to hours, days and weeks where we've been able to have that impact. In this playbook that we pulled together, the before-and-after picture really shows the savings that can be delivered by getting data into some actionable form within hours and days to drive agility, while at the same time being able to enforce the controls to protect the use of that data and who has access to it. So, Lisa, the number one thing I'd have to say is time, and we can see that on the graphic that we've just pulled up here. We talk about achieving adaptive data governance. Lester, you guys talk about automation; you talk about machine learning. How are you seeing those technologies facilitating organizations adopting adaptive data governance? Well, as we see it, the days of manual effort are over. This is a multi-step process, but the very first step is understanding what you have and normalizing that across your data estate.
You couple this with the ontologies that are unique to your business areas, and with algorithms, and you basically go across the estate and identify and tag the data. That allows the next steps to happen. Now I can write business rules not in terms of named columns, but in terms of the tags. Being able to automate that is a huge time saver, as is the fact that we can suggest a rule rather than waiting for a person to come along and say, oh, wow, okay, I need this rule and I need that rule. These are the steps that decrease that time to value that AJ talked about. And then lastly, couple in machine learning, because even with great automation, being able to profile all of your data and get a good understanding only brings you to a certain point; there are still ambiguities in the data. For example, I might have two columns, date one and date two. I may even have observed that date one should be less than date two, but I don't really know what date one and date two are, other than dates. So this is where the user comes in, and I might ask: can you help me identify what date one and date two are in this table? It turns out they're a start date and an end date for a loan. That gets remembered and cycled into the machine learning, so if I start to see this pattern of date one and date two elsewhere, I'm going to ask: is it a start date and an end date? Bringing all these things together with all this automation is really what's key to enabling your data governance program. Great, thanks, Lester. And AJ, I want to wrap things up with something that you mentioned in the beginning about what you guys are doing with Oracle. Take us out by telling us what you're doing there. How are you working together?
Yeah, I think those of us who have worked in IT for many years have learned to trust Oracle's technology. They're shifting now to a hybrid on-premises and cloud Generation 2 platform, which is exciting, and their existing customers and new customers are moving to Oracle on that journey. So Oracle came to us and said, we can see how quickly you're able to help us change mindsets. Mindsets can be locked into ways of thinking around IT operating models that are maybe not agile and more siloed, and customers want to break free of that and adopt a more agile, API-driven approach. A lot of the work that we're doing with Oracle is around accelerating what customers can do in understanding their data and building digital apps, by identifying the underlying data that has value. The fact that we're able to do that in hours, days and weeks, rather than many months, is opening the eyes of chief data officers and CIOs to say, well, maybe we can do this whole digital transformation this year; maybe we can bring that forward and transform who we are as a company. And that's driving innovation, which we're excited about and which I know Oracle is keen to drive through. And helping businesses transform digitally is so incredibly important at this time, as we look to things changing in 2021. AJ, Lester, thank you so much for joining me on this segment, explaining adaptive data governance, how organizations can use it, benefit from it and achieve ROI. Thanks so much, guys. Thank you. Thanks again, Lisa. In a moment, we'll look at adaptive data governance in banking. This is theCUBE, your global leader in high-tech coverage. Our next segment is an interesting panel.
You're going to hear from three gentlemen about adaptive data governance; we're going to talk a lot about that. Please welcome Yusuf Khan, the global director of data services for Io-Tahoe. We also have Santiago Castro, the chief data officer at First Bank of Nigeria, and Gudrun Van Der Waal, Oracle's senior manager of digital transformation and industries. Gentlemen, it's great to have you joining us on this panel. Thank you. Thanks for having me. All right, Santiago, we're going to start with you. Can you talk to the audience a little bit about First Bank of Nigeria and its scale? This is beyond Nigeria; talk to us about that. Yes. First Bank of Nigeria was created 125 years ago, so it's one of the oldest, if not the oldest, banks in Africa. And because of that history, it grew everywhere in the region and beyond. I am currently based in London, which is kind of the European headquarters, and the bank really promotes trade finance, institutional banking, corporate banking and private banking around the world, in particular in relation to Africa. We are also in Asia and in the Middle East. So Santiago, talk to me about what adaptive data governance means to you, and how does it help First Bank of Nigeria innovate faster with the data that you have? Yes, I like the concept of adaptive data governance because it's an approach that can really happen today with the new technologies; before, it was much more difficult to implement. Just to give you a little bit of context, I worked in consulting for 16, 17 years before joining First Bank of Nigeria, and I saw many organizations trying to apply different types of approaches to data governance. In the early days, it was really a top-down approach, where data governance was seen as implementing a set of rules, policies and procedures, but really from the top down. And it's important.
It's important to have the backing of your C-level, of your directors, but if it's only that way, it fails. You really need a complementary approach; I often say bottom-up. As a CDO, I'm really trying to decentralize data governance. Instead of imposing a framework that some people in the business don't understand or don't care about, it really needs to come from them. What I'm trying to say is that data basically supports business objectives, and every business area needs information in order to take decisions, to be more efficient, to create value, et cetera. Now, depending on the business questions they have to solve, they will need certain data sets, so they need to be able to have data quality for their own purposes. When they understand that, they naturally become the stewards of their own data sets. And that is where my bottom-up meets my top-down. You can guide them from the top, but they themselves need to be empowered and, in a way, flexible enough to adapt to the different questions they have, in order to respond to the business needs. I cannot impose a definition on everyone; I need them to adapt and to bring their own answers to their own business questions. That is adaptive data governance. And all of that is possible, just to finalize the point I was making at the very beginning, because we have new technologies that allow you to do this metadata classification in a very sophisticated way, so that you can actually create analytics on your metadata. You can understand your different data sources in order to create classifications, like nationalities, a way of classifying your customers, your products, et cetera. One of the things you just said, Santiago, kind of struck me: to enable the users to be adaptive, they probably don't want to be logging a support ticket.
So how do you support that sort of self-service to meet the demand of the users, so that they can be adaptive? More and more, business users want autonomy, and they want to basically be able to grab the data and answer their own questions. When you have that, that's great, because then you have demand: the business is asking for data, they are asking for insights. So how do you actually support that? I would say there is a change in culture happening more and more, and even the current pandemic has helped a lot with that. Technology, of course, has been one of the biggest winners: without technology we couldn't have been working remotely, without these technologies where people can log in from their homes and still have a data marketplace where they self-serve their information. But even beyond that, data is a big winner. The pandemic has shown us that crises happen, that we cannot predict everything, and that we face new kinds of situations outside our comfort zone, where we need to explore, adapt and be flexible. How do we do that with data? Every single company either saw its revenue going down, or going way up for those companies that were already very digital. The reality changed, so they needed to adapt, but for that they needed information in order to think, innovate and try to create responses. So that type of self-service of data, in order to understand what's happening when the context is changing, has become more of a topic today, because of the pandemic and because of the new capabilities and technologies that allow it. And then you are able to help your data citizens, as I call them: people in the organization who know their business and can start playing with the data and answering their own questions.
So these technologies give more accessibility to the data, give some cataloging so people can understand where to go and what to find, and give lineage and relationships. All of this is basically the new type of platform tool that allows you to create what I call a data marketplace. I think these new tools are really strong, because they now allow people who are not technology or IT people to play with data, because it comes in the digital form they are used to. I'll give you an example. Within Io-Tahoe, you have a very interesting search functionality: if you want to find your data, you want to self-serve, you go to that search and you look for your data. Everybody knows how to search in Google; everybody searches the internet. So this is part of the data culture, the digital culture: they know how to use those tools. Similarly, in that data marketplace in Io-Tahoe, you can see, for example, which data sources are most used. And that enables the speed we're all demanding today during these unprecedented times. Gudrun, I wanted to go to you. In that spirit of evolution, technology is changing; talk to us a little bit about Oracle Digital. What are you guys doing there? Yeah, thank you. Well, Oracle Digital is a business unit at Oracle EMEA, and we focus on emerging countries as well as low-end enterprises and the mid-market in more developed countries. Four years ago, this started with the idea of engaging digitally with our customers via central hubs across EMEA. That means engaging with video, having conference calls, having a green wall that we stand in front of to engage with our customers. No one at that time could have foreseen the situation we're in today, and this set-up helps us engage with our customers in the way we were already doing. As for my team, its focus is to have early-stage conversations with our customers on digital transformation and innovation.
We also have a team of industry experts who engage with our customers and share expertise across EMEA, and we inspire our customers. The outcome of these conversations, for Oracle, is a deep understanding of our customers' needs, which is very important so we can help them. And for the customer, it means that we will help them with our technology and our resources to achieve their goals. It's all about outcomes, right, Gudrun? So in terms of automation, what are some of the things Oracle is doing to help your clients leverage automation to improve agility, so that they can innovate faster, which in these interesting times is demanded? Yeah, thank you. Well, traditionally, Oracle is known for its databases, which have been innovated on year over year since their launch. The latest innovation is the autonomous database and autonomous data warehouse. For our customers, this means a reduction in operational cost by 90%, with a multi-model converged database and machine-learning-based automation for full lifecycle management. Our database is self-driving: we automate database provisioning, tuning and scaling. The database is self-securing: we automate data protection and security. And it's self-repairing: we automate failure detection, failover and repair. And then the question is, for our customers, what does it mean? It means they can focus on their business instead of maintaining their infrastructure and operations. That's absolutely critical. Yusuf, I want to go over to you now. We've talked about the massive progression in technology and its evolution, but we know that, whether we're talking about data management or digital transformation, a one-size-fits-all approach doesn't work to address the challenges that the business has and that the IT folks have.
As you look across the industry, beyond what Santiago told us about First Bank of Nigeria, what are some of the changes that Io-Tahoe is seeing? Well, Lisa, the first way I'd characterize it is to say that the traditional top-down approach to data, where you have almost a data policeman who tells you what you can and can't do, just doesn't work anymore. It's too slow and too resource-intensive. Data management, data governance, digital transformation itself: it has to be collaborative, and there has to be an element of personalization for data users. In the environment we find ourselves in now, it has to be about enabling self-service as well. A one-size-fits-all model when it comes to data doesn't work. As Santiago was saying, it needs to be adapted to how the data is used and who is using it. And in order to do this, companies, enterprises and organizations really need to know their data. They need to understand what data they hold, where it is, and what its sensitivity is. Then they can, in a more agile way, apply appropriate controls and access, so that people and groups within businesses are agile and can innovate; otherwise everything grinds to a halt and you risk falling behind your competitors. Yeah, that one-size-fits-all term just doesn't apply when you're talking about being adaptive and agile. So we heard from Santiago about some of the impact they're making at First Bank of Nigeria. Yusuf, talk to us about some of the business outcomes you're seeing other customers achieve by leveraging automation that they could not achieve before. It's automatically being able to classify terabytes or even petabytes of data across different sources to find duplicates, which you can then remediate and delete.
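Duplicate detection at that scale is typically done by fingerprinting normalized records rather than comparing every pair of rows. The following is a toy sketch of that general idea, not the approach of any particular product; all names and sample data are invented.

```python
import hashlib
from collections import defaultdict

def fingerprint(record):
    """Hash a normalized form of the record so that trivially
    different copies (case, stray whitespace) still collide."""
    canon = "|".join(str(record[k]).strip().lower() for k in sorted(record))
    return hashlib.sha256(canon.encode()).hexdigest()

def find_duplicates(records):
    """Group record indices by fingerprint; groups of more than one
    index are duplicate candidates to remediate or delete."""
    groups = defaultdict(list)
    for i, rec in enumerate(records):
        groups[fingerprint(rec)].append(i)
    return [idxs for idxs in groups.values() if len(idxs) > 1]

records = [
    {"name": "Ann Smith", "email": "ann@example.com"},
    {"name": "ann smith ", "email": "ANN@example.com"},  # near-identical copy
    {"name": "Bob Jones", "email": "bob@example.com"},
]
dupes = find_duplicates(records)  # → [[0, 1]]
```

Because each record is hashed once, the cost grows linearly with the data rather than quadratically, which is what makes multi-terabyte estates tractable.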
Now, with the capabilities that Io-Tahoe and Oracle offer, you can do things not just with a five-times or ten-times improvement; it actually enables you to do projects, full stop, that otherwise would fail or that you would just not be able to do. Classifying multi-terabyte and multi-petabyte estates across different sources and formats, at very large volumes of data, in many scenarios you just can't do manually. We've worked with government departments, and the issues there, as you'd expect, are a lot of fragmented data, a lot of different sources, a lot of different formats, and without these newer technologies addressing it with automation and machine learning, the project isn't doable. But now it is, and that could lead to a revolution in some of these businesses and organizations. To enable that revolution, there's got to be the right cultural mindset. When Santiago was talking about folks adapting to that, the thing I always call it is getting comfortably uncomfortable, and that's hard for organizations to do. The technology is here to enable it, but when you're talking with customers, how do you help them build the trust and the confidence that the new technologies and new approaches can deliver what they need? How do you help drive both the tech and the culture? It's a really good question, because it can be quite scary. I think the first thing we'd start with is to say, look, the technology is here, with businesses like Io-Tahoe and Oracle; it's already arrived. What you need to be comfortable doing is experimenting, being agile around it, and trying new ways of doing things if you don't want to get left behind. Santiago and the team at FBN are a great example of embracing it, testing it on a small scale and then scaling up. At Io-Tahoe, we offer what we call a data health check, which can be done very quickly, in a matter of a few weeks.
So we'll work with a customer, pick a use case, install the application, analyze the data and drive out some quick wins. For example, we worked in the last few weeks with a large energy supplier, and in about 20 days we were able to give them an accurate understanding of their critical data elements, apply data protection policies, minimize copies of the data, and work out what data they needed to delete to reduce their infrastructure spend. So it's about experimenting on that small scale, being agile and then scaling up in a very modern way. Great advice. Santiago, I'd like to go back to that topic of culture and the need to get the mindset there to facilitate these rapid changes. Last question for you: how are you doing that from a digital transformation perspective, with everything accelerating in 2020? How are you building resilience into your data architecture, and also driving the cultural change that can help everyone in this shift to remote working and all the digital challenges and changes we're going through? The new technologies allowed us to discover the data in a new way, to plug in and see information very quickly, to have new models of governing the data, and to give autonomy to our different data units. From that autonomy, they can then compose and innovate in their own ways. For me, this is about resilience, because in a way, autonomy and flexibility in your organization, in your data structure and platform, give you resilience. The organizations and business units that I have seen working well in the pandemic are those where, because people are not physically present in the office, you give them their autonomy, let them engage and do their own jobs, and trust them. And as you give them that, they start innovating and having really interesting ideas.
So autonomy and flexibility, I think, are key components of the new infrastructure, and of the new reality. The pandemic showed us that, yes, we used to be very structured, policies, procedures, and that's very important, but now we've learned flexibility and adaptability at the same time. Once you have that, another key component of resilience is speed. Of course, people want to access the data, access it fast and decide fast, and things are changing so quickly nowadays that you need to be able to iterate with your information to answer your questions quickly. So technology that allows you to be flexible and iterate in a very fast, agile way will allow you to be resilient, because you are flexible, you adapt, you are agile, and you continue answering questions as they come, without having everything set in a structure that is too rigid. We are also a partner of Oracle, and Oracle is great in that respect. They have embedded many algorithms within the transactional system that allow us to calculate as the transactions happen. When our customers engage with those algorithms, and with Io-Tahoe as well, the machine learning that is there for speeding up the automation of how you find your data allows you to create an alliance with the machine. The machine is there to be, in a way, your best friend, to have more volume of data calculated faster, covering more variety. We couldn't cope without being connected to these algorithms, and all that engagement is absolutely critical. Santiago, thank you for sharing that. I do want to wrap really quickly. Gujran, one last question for you. Santiago talked about Oracle, and you've talked about it a little bit as we look at digital resilience. 
Talk to us a little bit, in the last minute, about the evolution of Oracle and what you're doing there to help your customers get the resilience they have to have to not just survive, but thrive. Yeah. Well, Oracle has a cloud offering for infrastructure, database, platform services, and complete solutions offered as SaaS. And as Santiago also mentioned, we are using AI across our entire portfolio. By this we help our customers focus on their business innovation and capitalize on data by enabling new business models. Oracle has global coverage with our cloud regions, and it's massively investing in innovating and expanding the cloud. By offering cloud as public cloud, in our data centers, and also as private cloud with Cloud at Customer, we can meet every sovereignty and security requirement. In this way we help people to see data in new ways, discover insights, and unlock endless possibilities. And maybe one of my takeaways is, when I speak with customers, I always tell them: you'd better start collecting your data now. We enable this, and partners like Io-Tahoe help us as well. If you collect your data now, you are ready for tomorrow. You can never collect your data backwards. So that is my takeaway for today. You can't collect your data backwards. Excellent, Lujan. Gentlemen, thank you for sharing all of your insights, a very informative conversation. In a moment, we'll address the question: do you know your data? Are you interested in test driving the Io-Tahoe platform? Kickstart the benefits of data automation for your business through the Io-Tahoe data health check program: a flexible, scalable sandbox environment on the cloud of your choice, with setup, service, and support provided by Io-Tahoe. Book time with a data engineer to learn more and see Io-Tahoe in action. From around the globe, it's theCUBE. Presenting adaptive data governance. Brought to you by Io-Tahoe. 
In this next segment, we're going to be talking to you about getting to know your data. Specifically, you're going to hear from two folks at Io-Tahoe. We've got enterprise account exec Sabita Davis here, as well as enterprise data engineer Patrick Zymett. They're going to be sharing insights and tips and tricks for how you can get to know your data, and quickly. And we also want to encourage you to engage with Sabita and Patrick: use the chat feature to the right, send comments, questions, or feedback so that you can participate. All right, Patrick, Sabita, take it away. All right, thanks Lisa, great to be here. As Lisa mentioned, guys, I'm the enterprise account executive here at Io-Tahoe. Now, Pat? Yeah, hey everyone, so great to be here. As Sabita said, my name's Patrick Zymett. I'm the enterprise data engineer here at Io-Tahoe. And we're so excited to be here and talk about this topic, as one thing we're really trying to perpetuate is that data is everyone's business. So guys, Pat and I have actually had multiple discussions with clients from different organizations and with different roles. We've spoken with both technical and non-technical audiences, and while they were interested in different aspects of our platform, we found that what they had in common was that they wanted to make data easy to understand and usable. That comes back to Pat's point of data being everybody's business, because no matter your role, we're all dependent on data. So what Pat and I wanted to do today was walk you through some of those client questions and pain points that we're hearing from different industries and different roles, and demo how our platform here at Io-Tahoe is used for automating those data-related tasks. So with that said, are you ready for the first one, Pat? Yeah, let's do it. Great. So I'm going to put my technical hat on for this one. I'm a data practitioner. I just started my job at ABC Bank. I have over a hundred different data sources. 
So I have data kept in data lakes, legacy data sources, even the cloud. My issue is I don't know what those data sources hold, I don't know what data is sensitive, and I don't even understand how that data is connected. So how can Io-Tahoe help? Yeah, I think that's a very common experience many are facing, and definitely something I've encountered in my past. Typically the first step is to catalog the data and then start mapping the relationships between your various data stores. Now, more often than not, this is tackled through numerous meetings and a combination of Excel and something similar to Visio, which are two great tools in their own right, but they're very difficult to maintain just due to the rate at which we are creating data in the modern world. It starts to beg for a solution that can scale with your business needs, and this is where a platform like Io-Tahoe becomes so appealing. You can see here a visualization of the data relationships created by the Io-Tahoe service. Now, what is fantastic about this is that it's not only laid out in a very human and digestible format; in the same action of creating this view, the data catalog was constructed. So is the data catalog automatically populated? Correct. Okay, so what I'm getting when I use Io-Tahoe, Pat, is this complete, unified, automated platform, without the added cost, of course. Exactly, and that's at the heart of Io-Tahoe. A great feature of that data catalog is that Io-Tahoe will also profile your data as it creates the catalog, assigning some meaning to those pesky column_1s and custom_variable_10s that are always such a joy to deal with. Now, by leveraging this interface, we can start to answer the first part of your question and understand where the core relationships within our data exist. Personally, I'm a big fan of this view, as it really helps the eye be naturally drawn to the focal points that coincide with these key columns. 
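The kind of relationship mapping Pat describes can be illustrated with a toy example. The sketch below is not Io-Tahoe's actual algorithm (which the segment doesn't detail); it is a minimal, hypothetical approach that predicts candidate relationships between columns by measuring how much their distinct values overlap. All table and column names are made up for illustration.

```python
# Minimal sketch of relationship prediction via column-value overlap.
# Hypothetical illustration only, not Io-Tahoe's actual algorithm.

def overlap(a, b):
    """Best-direction overlap between two columns' distinct values."""
    a, b = set(a), set(b)
    if not a or not b:
        return 0.0
    shared = len(a & b)
    return max(shared / len(a), shared / len(b))

def predict_relationships(tables, threshold=0.8):
    """Return (table.col, table.col, score) pairs across different tables
    whose value overlap meets the threshold."""
    cols = [(t, c, vals) for t, d in tables.items() for c, vals in d.items()]
    preds = []
    for i, (t1, c1, v1) in enumerate(cols):
        for t2, c2, v2 in cols[i + 1:]:
            if t1 == t2:
                continue  # only look for cross-table relationships
            score = overlap(v1, v2)
            if score >= threshold:
                preds.append((f"{t1}.{c1}", f"{t2}.{c2}", score))
    return preds

tables = {
    "customers": {"customer_id": [1, 2, 3, 4], "name": ["Ann", "Bob", "Cat", "Dan"]},
    "loans":     {"loan_id": [10, 11, 12], "customer_id": [1, 2, 2]},
}
print(predict_relationships(tables))
# → [('customers.customer_id', 'loans.customer_id', 1.0)]
```

A production system would of course also weigh data types, name similarity, and cardinality, which is roughly what the confidence scores on predicted connectors capture.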
Following that train of thought, let's examine the customer ID column that seems to be at the center of a lot of these relationships. We can see that it's a fairly important column, as it's maintaining the relationship between at least three other tables. Now, you'll notice all of the connectors are in this blue color. That means they're system-defined relationships, but Io-Tahoe goes that extra mile and actually creates these orange-colored connectors as well. These are ones that our machine learning algorithms have predicted to be relationships, and you can leverage them to try and make new and powerful connections within your data. So this is really cool, and I can see how this can be leveraged quickly. Now, what if I added new data sources, or multiple data sources, and needed to identify what data is sensitive? Can Io-Tahoe detect that? Yeah, definitely. Within the Io-Tahoe platform, there are already over 300 predefined policies, such as HIPAA, FERPA, CCPA, and the like. One can choose which of these policies to run against their data, allowing for flexibility and efficiency in running the policies that affect your organization. Okay, so 300 is an exceptional number, I'll give you that. But what about internal policies that apply to my organization? Is there any ability for me to write custom policies? Yeah, that's no issue, and it's something that clients leverage fairly often. To utilize this function, one simply has to write a regex, something our team has helped many clients deploy. After that, the custom policy is stored for future use. To profile sensitive data, one then selects the data sources they're interested in and the policies that meet their particular needs. The interface will automatically tag your data according to the policies it detects, after which you can review the discoveries, confirming or rejecting the tagging. 
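To make the custom-policy idea concrete, here is a hedged sketch of how regex-based tagging of sensitive columns might work. The policy patterns, thresholds, column names, and sample values are all hypothetical; Io-Tahoe's built-in policy definitions are not published in this segment.

```python
import re

# Sketch of regex-driven sensitive-data tagging.
# Policies and sample data are hypothetical illustrations.
POLICIES = {
    "US_SSN": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "EMAIL":  re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def tag_columns(columns, policies=POLICIES, min_match=0.8):
    """Tag a column with a policy name when most sampled values match its regex."""
    tags = {}
    for name, values in columns.items():
        for policy, pattern in policies.items():
            hits = sum(1 for v in values if pattern.match(str(v)))
            if values and hits / len(values) >= min_match:
                tags.setdefault(name, []).append(policy)
    return tags

sample = {
    "col_1":   ["123-45-6789", "987-65-4321"],           # an unlabeled SSN column
    "contact": ["ann@example.com", "bob@example.com"],
    "notes":   ["call back Monday", "n/a"],
}
print(tag_columns(sample))  # → {'col_1': ['US_SSN'], 'contact': ['EMAIL']}
```

The threshold matters: sampling with a tolerance for a few malformed values (here 80%) is what lets a scan still flag a column like `col_1` even when its data is imperfect, which is exactly why a human review step to confirm or reject tags follows.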
All of these insights are easily exported through the interface, so one can work them into action items within your project management systems. And I think this lends itself to collaboration, as a team can work through the discovery simultaneously, and as each item is confirmed or rejected, they can see it near instantaneously. All of this translates to a confidence that with Io-Tahoe, you can be sure you're in compliance. So I'm glad you mentioned compliance, because that's extremely important to my organization. So what you're saying is, when we use the Io-Tahoe automated platform, we'd be 90% more compliant than if we were using a human. Yeah, definitely. The collaboration and documentation that the Io-Tahoe interface lends itself to can really help you build that confidence that your compliance is sound. So we're planning a migration, and I have a set of reports I need to migrate. But what I need to know is what data sources those reports are dependent on and what's feeding those tables. Yeah, it's a fantastic question, Sabita. Identifying critical data elements and the interdependencies within the various databases can be a time-consuming but vital process in a migration initiative. Luckily, Io-Tahoe does have an answer, and again, it's presented in a very visual format. So what I'm looking at here is my entire data landscape? Yes, exactly. Let's add another data source; I can still see that unified 360 view. Yeah, one feature that is particularly helpful is the ability to add data sources after the data lineage discovery has finished, allowing for the flexibility and scope necessary for any data migration project. Whether you only need to select a few databases or your entire estate, this service will provide the answers you're looking for. This visual representation of the connectivity makes the identification of critical data elements a simple matter. 
The connections are driven by both system-defined flows as well as those predicted by our algorithms, the confidence of which can actually be tuned to make sure it meets the needs of the initiative you have in place. This also provides a tabular output, in case you need it for your own internal documentation or for your action items, which we can see right here. In this interface, you can also confirm or reject the predicted connections, allowing you to make sure the data is as accurate as possible. Does that help with your data lineage needs? Definitely. So Pat, my next big question here is, now I know a little bit about my data; how do I know I can trust it? What I'm really interested in knowing is: is it in a fit state for me to use? Is it accurate? Does it conform to the right format? Yeah, that's a great question, and I think that is a pain point felt across the board, be it by data practitioners or data consumers alike. Another service that Io-Tahoe provides is the ability to write custom data quality rules and understand how well the data conforms to those rules. This dashboard gives a unified view of the strength of these rules and your data's overall quality. Okay, so Pat, on the accuracy scores there: if my marketing team needs to run a campaign, can we depend on those accuracy scores to know which tables have quality data to use for our marketing campaign? Yeah, this view would allow you to understand your overall accuracy as well as dive into the minutiae to see which data elements are of the highest quality. So for that marketing campaign, if you need everything in a strong form, you'll be able to see that very quickly with these high-level numbers, but if you're only dependent on a few columns to get that information out the door, you can find that within this view. 
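A custom data quality rule of the kind discussed here, like the generic "the start date of a loan must always be before the end date" example from earlier in the program, can be sketched as a predicate scored against rows. This is a hypothetical, minimal illustration; the column names and rows are invented, and Io-Tahoe's rule engine is not shown in this segment.

```python
from datetime import date

# Sketch of a generic, source-agnostic data quality rule and its accuracy
# score. Column names and rows are hypothetical.

def rule_start_before_end(row):
    """Generic rule: a loan's start date must precede its end date."""
    return row["start_date"] < row["end_date"]

def accuracy(rows, rule):
    """Fraction of rows satisfying the rule (an accuracy-style score)."""
    return sum(1 for r in rows if rule(r)) / len(rows) if rows else 1.0

loans = [
    {"start_date": date(2020, 1, 1), "end_date": date(2025, 1, 1)},
    {"start_date": date(2021, 6, 1), "end_date": date(2021, 3, 1)},  # violates the rule
    {"start_date": date(2019, 5, 1), "end_date": date(2024, 5, 1)},
]
print(accuracy(loans, rule_start_before_end))  # → 2/3 ≈ 0.67
```

Because the rule is written against column meaning rather than a specific schema, the same predicate can be applied to any newly onboarded source whose columns map to those definitions, which is the "adaptive" part of adaptive data governance described at the top of the program.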
So you no longer have to rely on reports about reports, but instead just come to this one platform to help drive conversations between stakeholders and data practitioners. So I get now the value that Io-Tahoe brings by automatically capturing all that technical metadata from sources, but how do we match that with the business glossary? Yeah, within the same data quality service that we just reviewed, one can actually add business rules detailing the definitions and the business domains they fall into. What's more, the data quality rules we were just looking at can then be tied to these definitions, allowing insight into the strength of the business rules. It is this service that empowers stakeholders across the business to be involved with the data lifecycle and take ownership of the rules that fall within their domain. Okay, so those custom rules, can I apply them across data sources? Yeah, you can bring in as many data sources as you need, so long as you can tie them to that unified definition. Okay, great, thanks so much, Pat. And we just want to quickly say to everyone working in data: we understand your pain, so please feel free to reach out to us via our website, the chat below, or LinkedIn, and let's get a conversation started on how Io-Tahoe can help you automate all those manual tasks to save you time and money. Thank you. Thank you, everyone. Hey Pat, if I could ask you one quick question: how do you advise customers? You just walked through this great banking example. How do you advise customers to get started? Yeah, I think the number one thing customers can do to get started with our platform is to just run the tag discovery and build out that data catalog. 
It lends itself very quickly to the other needs you might have, such as those quality rules, as well as identifying the kind of tricky columns that might exist in your data, those custom_variable_10s I mentioned before. Last question, Sabita: anything to add to what Pat just described as a starting place? No, I think Pat actually summed it up pretty well. Just by automating all those manual tasks, it can definitely save your company a lot of time and money. So we encourage you, just reach out to us and let's get that conversation started. Excellent. Sabita and Pat, thank you so much. We hope you have learned a lot from these folks about how to get to know your data and make sure it's quality data, so that you can maximize the value of it. Thanks for watching. Thanks again, Lisa, for that very insightful and useful deep dive into the world of adaptive data governance with Io-Tahoe, Oracle, and First Bank of Nigeria. This is Dave Vellante. You won't want to miss Io-Tahoe's fifth episode in the data automation series. In that one, we'll talk to experts from Red Hat and Happiest Minds about their best practices for managing data across hybrid cloud, inter-cloud, and multi-cloud IT environments. So mark your calendar for Wednesday, January 27th. That's episode five. You're watching theCUBE, the global leader in digital tech event coverage.