From around the globe, it's theCUBE. Presenting enterprise digital resilience on hybrid and multi-cloud, brought to you by Io-Tahoe. Hello everyone, and welcome to our continuing series covering data automation, brought to you by Io-Tahoe. Today we're going to look at how to ensure enterprise resilience for hybrid and multi-cloud. Let's welcome in Ajay Vohora, who's the CEO of Io-Tahoe. Ajay, always good to see you again. Thanks for coming on. Great to be back, David, pleasure. And he's joined by Fadzi Ushewokunze, who is a global principal architect for financial services, the vertical of financial services, at Red Hat. He's got deep experience in that sector. Welcome, Fadzi, good to see you. Thank you very much. Happy to be here. Fadzi, let's start with you. Look, there are a lot of views on cloud and what it is. I wonder if you could explain to us how you think about what a hybrid cloud is and how it works. Sure, yes. So a hybrid cloud is an IT architecture that incorporates some degree of workload portability, orchestration and management across multiple clouds. Those clouds could be private clouds or public clouds, or even your own data centers. And how does it all work? It's all about secure interconnectivity and on-demand allocation of resources across clouds. Separate clouds become hybrid when they're seamlessly interconnected, and it is that interconnectivity that allows workloads to be moved, management to be unified and orchestration to work. How well you build these interconnections has a direct impact on how well your hybrid cloud will work. Okay, so Fadzi, staying with you for a minute. In the early days of cloud, the term private cloud was thrown around a lot, but it often just meant virtualization of an on-prem system and a network connection to the public cloud. Let's bring it forward. What, in your view, does a modern hybrid cloud architecture look like?
Sure. So for modern hybrid clouds, we see that organizations need to focus on the portability of applications across clouds. That's very important, right? When organizations build applications, they need to build and deploy them as small collections of independently deployable, loosely coupled services, and then have those things run on the same operating system, which means, in other words, running Linux everywhere, building cloud native applications, and being able to manage and orchestrate these applications with platforms like Kubernetes or Red Hat OpenShift, for example. Okay, Fadzi, that's definitely different from building a monolithic application that's fossilized and doesn't move. So what are the challenges for customers to get to that modern cloud as you've just described it? Is it skill sets? Is it the ability to leverage things like containers? What's your view there? So, from what we've seen around the industry, especially around financial services, where I spend most of my time, the first thing we see is management, right? Because you have all these clouds and all these applications, you have a massive array of interconnections. You also have a massive array of integrations, portability and resource allocations as well. And orchestrating all those different moving pieces, things like storage and networks, those are really difficult to manage, right? So management is the first challenge. The second one is workload placement. Where do you place these cloud native applications? What do you keep on site, or on-prem, and what do you put in the cloud? That is the other challenge. The third one, and the major one, is security. Security now becomes the key challenge and concern for most customers, and we could talk about how to address that. Yeah, we're definitely going to dig into that. Let's bring Ajay into the conversation.
Ajay, you and I have talked about this in the past. One of the big problems that virtually every company faces is data fragmentation. Talk a little bit about how Io-Tahoe unifies data across both traditional, legacy systems and how it connects to these modern IT environments. Yeah, sure, Dave. I mean, Fadzi just nailed it there. It used to be about the volume of data and the different types of data, but as applications become more connected and interconnected, the location of that data really matters, and how we serve that data up to those apps. So with our partnership with Red Hat, being able to inject our data discovery and machine learning into these multiple different locations, whether it be AWS, IBM Cloud, GCP or on-prem, being able to automate that discovery and pull together that single view of where all my data is, then allows the CIO to manage costs and do things like keep the data where it is, on-premise or in my Oracle cloud or in my IBM cloud, and connect the application that needs to feed off that data. And the way in which we do that is machine learning that learns over time, as it recognizes different types of data, applies policies to classify that data, and brings that all together with automation. Right, and one of the big themes that we've talked about on earlier episodes is really simplification, abstracting a lot of that heavy lifting away so we can focus on things like those, Ajay, as you just mentioned. I mean, Fadzi, one of the big challenges that, of course, we all talk about is governance across these disparate data sets. I'm curious as to your thoughts: how does Red Hat think about helping customers adhere to corporate edicts and compliance regulations, which of course are particularly acute within financial services? Oh, yeah, yeah.
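The automated, policy-driven classification described above can be sketched in miniature. This is a hypothetical illustration only: the policy names and regex patterns below are invented, and a production discovery engine would use machine learning models rather than hand-written patterns.

```python
import re

# Invented, deliberately tiny policy catalog for illustration.
POLICIES = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "credit_card": re.compile(r"^\d{4}(-?\d{4}){3}$"),
}

def classify_column(values, threshold=0.8):
    """Tag a column with a policy if most sampled values match its pattern."""
    tags = []
    for name, pattern in POLICIES.items():
        hits = sum(1 for v in values if pattern.match(str(v)))
        if values and hits / len(values) >= threshold:
            tags.append(name)
    return tags

emails = ["a@example.com", "b@example.org", "c@example.net"]
print(classify_column(emails))  # ['email']
```

The point of the sketch is the shape of the problem: scan a sample of each column, decide which policies apply, and attach those tags so downstream automation can act on them.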
So for banks and payment providers, like you've just mentioned there, insurers and many other financial services firms, they have to adhere to standards such as, say, PCI DSS, and in Europe you've got the GDPR, which requires stringent tracking, reporting and documentation for them to remain in compliance. And the way we recommend our customers address these challenges is by having an automation strategy, right? That type of strategy can help you improve the security and compliance of your organization and reduce the risk across the business. We help organizations build security and compliance in from the start with our consulting services and residencies, and we also offer courses that help customers understand how to address some of these challenges. We also help organizations build security into their applications with our open source and middleware offerings, and even using a platform like OpenShift, because it allows you to run legacy applications and also containerized applications in a unified platform, right? And that provides you with the automation and the tooling that you need to continuously monitor, manage and automate the systems for security and compliance purposes. Ajay, any color you could add to this conversation? Yeah, I'm pleased Fadzi brought up OpenShift. We're using OpenShift to be able to take that application of security controls to the data level, and it's all about context. So understanding what data is there, being able to assess it to say who should have access to it, which application permissions should be applied to it. That's a great combination of Red Hat and Io-Tahoe. Fadzi, what about multi-cloud? Doesn't that complicate the situation even further? Maybe you could talk about some of the best practices to apply automation across not only hybrid cloud, but multi-cloud as well. Yeah, sure, yeah.
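At its core, mapping classified data to standards such as PCI DSS and the GDPR, as discussed above, is a lookup from data classifications to the regulations they trigger. A toy sketch follows; the mapping is invented and far smaller than any real policy catalog:

```python
# Hypothetical mapping from data classifications to regulations.
REGULATION_MAP = {
    "credit_card": ["PCI DSS"],
    "email": ["GDPR"],
    "health_record": ["GDPR"],
}

def applicable_regulations(column_tags):
    """Collect the distinct regulations triggered by a column's tags."""
    regs = set()
    for tag in column_tags:
        regs.update(REGULATION_MAP.get(tag, []))
    return sorted(regs)

print(applicable_regulations(["credit_card", "email"]))  # ['GDPR', 'PCI DSS']
```

Once each dataset carries its applicable regulations, the tracking, reporting and documentation the standards demand can be generated from that metadata rather than assembled by hand.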
So the right automation solution can be the difference between cultivating an automated enterprise and automation chaos. And some of the recommendations we give our clients are to look for an automation platform that can offer, first, complete support. That means an automation solution that promotes IT availability and reliability, so that you get enterprise-grade support, including security and testing integration and clear roadmaps. The second thing is vendor interoperability, in that you are going to be integrating multiple clouds, so you're going to need a solution that can connect to multiple clouds seamlessly, right? And with that comes the challenge of maintainability, so you're going to need to look into an automation solution that has an easy learning curve. And then the fourth thing we tell our customers is scalability. In the hybrid cloud space, scale is the big deal here, and you need to deploy an automation solution that can span the whole enterprise in a consistent manner, right? And that also, finally, allows you to integrate the multiple data centers that you have. So Ajay, I mean, this is a complicated situation. If a customer has to make sure things work on AWS or Azure or Google, they're going to spend all their time doing that. What can you add to really simplify that multi-cloud and hybrid cloud equation? Yeah, I can give a few customer examples here, one being a manufacturer that we've worked with to drive that simplification, and the real bonus for them has been a reduction in cost. We worked with them late last year to bring their cost spend down by $10 million in 2021, so they could hit that reduced budget. And what we brought to that was the ability to deploy, using OpenShift templates, into their different environments, whether it was on-premise or, as you mentioned, AWS; they had GCP as well for their marketing team.
And across those different platforms, being able to use a template, use pre-built scripts to get up and running and catalog and discover that data within minutes, takes away the legacy of having teams of people jumping on workshop calls. And I know we're all on a lot of Teams and Zoom calls in these current times; there just aren't the hours in the day to manually perform all of this. So working with Red Hat, applying machine learning into those templates, those little recipes, we can put that automation to work regardless of which location the data's in, and that allows us to pull that unified view together. Great, thank you. Fadzi, I want to come back to you. So, the early days of cloud: you're in the Big Apple, you know financial services really well. Cloud was like an evil word within financial services, and obviously that's changed, it's evolved. We talk about how the pandemic has even accelerated that. And when you really dug into it, when you talked to customers about their experiences with security in the cloud, it wasn't that it wasn't good; it was different. And there's always this issue of a lack of skills and multiple tools; SecOps teams are really overburdened. But the cloud requires, you know, new thinking. You've got the shared responsibility model. You've obviously got specific corporate requirements and compliance. So this is even more complicated when you introduce multiple clouds. So what are the differences that you can share from your experience running on-prem, or on a mono cloud, versus across clouds? What do you suggest there? Yeah, you know, because of this complexity that you've explained here, misconfigurations and inadequate change control are the top security threats.
So human error is what we want to avoid, because as your clouds grow in complexity and you put humans in the mix, the rate of errors is going to increase, and that is going to expose you to security threats. So this is where automation comes in, because automation will streamline and increase the consistency of your infrastructure management, application development and even security operations, to improve your protection, compliance and change control. You want to consistently configure resources according to pre-approved policies, and you want to proactively maintain them in a repeatable fashion over the whole life cycle. Then you also want to rapidly identify systems that require patches and reconfiguration, and automate that process of patching and reconfiguring, so that you don't have humans doing this type of thing, right? You want to be able to easily apply patches and change system settings according to a pre-defined baseline, like I explained before, with the pre-approved policies, and you also want ease of auditing and troubleshooting, right? And from a Red Hat perspective, we provide tools that enable you to do this. We have, for example, a tool called Ansible that enables you to automate data center operations and security and the deployment of applications, and OpenShift itself automates most of these things and abstracts the human beings from putting their fingers in and potentially introducing errors, right? Now, looking into this new world of multiple clouds and so forth, the differences that we're seeing between running in a single cloud or on-prem come down to three main areas: control, security and compliance. Control here means that if you're on-premise or you have one cloud, in most cases you have control over your data and your applications, especially if you're on-prem.
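The idea of configuring resources against pre-approved policies and catching deviations, which tools like Ansible automate at scale, can be illustrated with a small drift check. The baseline keys and values here are invented for illustration; this is a sketch of the concept, not of any Red Hat tooling:

```python
# Invented pre-approved baseline: the settings every system should have.
BASELINE = {
    "ssh_root_login": "no",
    "password_min_length": 12,
    "firewall_enabled": True,
}

def find_drift(actual):
    """Return the settings that deviate from the pre-approved baseline."""
    return {
        key: {"expected": expected, "actual": actual.get(key)}
        for key, expected in BASELINE.items()
        if actual.get(key) != expected
    }

current = {"ssh_root_login": "yes", "password_min_length": 12,
           "firewall_enabled": True}
print(find_drift(current))
# {'ssh_root_login': {'expected': 'no', 'actual': 'yes'}}
```

In practice the same comparison runs continuously across the whole estate, and the remediation step (reapplying the baseline) is automated too, which is exactly the repeatability Fadzi is describing.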
However, if you're in the public cloud, there is a difference. There, the ownership is still yours, but your resources are running on somebody else's, the public cloud's, AWS's and so forth, infrastructure. So people that are going to do this, especially banks and governments, need to be aware of the regulatory constraints of running those applications in the public cloud, and we help customers rationalize some of these choices. Also, on security, you will see that if you're running on-premises or in a single cloud, you have more control; especially if you're on-prem, you can control the sensitive information that you have. However, in the cloud, that's a different situation, especially for personal information of employees and things like that; you need to be really careful with that. And again, we help you rationalize some of those choices. The last one is compliance. As well, you see that if you're running on-prem or in a single cloud, regulations come into play again, right? If you're running on-prem, you have control over that. You can document everything; you have access to everything that you need. But if you're going to go to the public cloud, again, you need to think about that. We have automation, and we have standards, that can help you address some of these challenges around security and compliance. Those are really strong insights, Fadzi. I mean, first of all, Ansible has a lot of market momentum; Red Hat's done a really good job with that acquisition. Your point about repeatability is critical, because you can't scale otherwise. And then that idea you're putting forth around control, security and compliance, it's so true. As I called it, the shared responsibility model: there was a lot of misunderstanding in the early days of cloud. I mean, yeah, maybe AWS is going to physically secure the S3 bucket, but we saw so many misconfigurations early on.
And so it's key to have partners that really understand this stuff and can share the experiences of other clients. So this all sounds great, Ajay. You've got a sharp financial background. What about the economics? Our survey data shows that security is at the top of the spending priority list, but budgets are stretched thin, especially when you think about the work-from-home pivot and all the holes that they had to fill there, whether it was laptops, new security models, et cetera. So how do organizations pay for this? What does the business case look like, in terms of maybe reducing infrastructure costs so I can pay it forward, or is there a risk reduction angle? What can you share there? Yeah, the perspective I'd like to give here is not viewing multi-cloud as multi-copies of an application or data. When I think back 20 years, a lot of the work in financial services I was looking at was managing copies of data that were feeding different pipelines, different applications. Now, a lot of the work that we're doing at Io-Tahoe is reducing the number of copies of that data. So that if I've got a product lifecycle management set of data, if I'm a manufacturer, I'm just going to keep that in one location; but across my different clouds I'm going to have best-of-breed applications, developed in-house, by third parties, in collaboration with my supply chain, connecting securely to that single version of the truth. What I'm not going to do is copy that data. So a lot of what we're seeing now is that interconnectivity, using applications built on Kubernetes and decoupled from the data source, that allows us to reduce those copies of data. Within that, you're gaining in security, capability and resilience, because you're not leaving yourself open through those multiple copies of data, and with that comes the cost of storage and the cost of compute.
So what we're seeing is using multi-cloud to leverage the best of what each cloud platform has to offer, and that goes all the way to Snowflake and Heroku on cloud managed databases too. Well, and the people cost too, as well, when you think about, yes, the copy creep, but then when something goes wrong, a human has to come in and figure it out. You brought up Snowflake; you get this vision of the data cloud, and I think we're going to be rethinking, Ajay, data architectures in the coming decade, where data stays where it belongs, it's distributed, and you're providing access. Like you said, you're separating the data from the applications, and applications, as we talked about with Fadzi, become much more portable. So really, the last 10 years will be different than the next 10 years, Ajay. Definitely. I think the people cost reduction is huge. Gone are the days where you needed to have a dozen people governing, managing and applying policies to data. A lot of that repetitive work, those tasks, can now be automated. We've seen examples in insurance where we've reduced teams of 15 people working in the back office, trying to apply security controls and compliance, down to just a couple of people who are looking at the exceptions that don't fit. And that's really important, because maybe two years ago the emphasis was on regulatory compliance of data, with policies such as GDPR and CCPA. Last year it was very much the economic effect of reduced headcounts and enterprises running lean, looking to reduce that cost. This year, we can see that already some of the more proactive companies are looking at initiatives such as net zero emissions: how do they use data to understand how they can have a better social impact, and use data to drive that, across all of their operations and supply chain.
So those regulatory compliance issues may have been external, but we see similar patterns emerging for internal initiatives that benefit the environment, social impact, and of course costs. Great perspectives. Jeff Hammerbacher once famously said the best minds of my generation are trying to get people to click on ads, and Ajay, those examples that you just gave of social good and moving things forward are really critical, and I think that's where data is going to have the biggest societal impact. Okay guys, great conversation. Thanks so much for coming on the program. Really appreciate your time. All right, keep it right there for more insight and conversation around creating a resilient digital business model. You're watching theCUBE. Digital resilience, automated. Compliance, privacy and security for your multi-cloud. Congratulations, you're on the journey. You have successfully transformed your organization by moving to a cloud-based platform to ensure business continuity in these challenging times. But as you scale your digital activities, there is an inevitable influx of users that outpaces traditional methods of cybersecurity, exposing your data to underlying threats and making your company susceptible to ever greater risk. To become digitally resilient, have you applied controls to your data continuously throughout the data life cycle? What are you doing to keep your customer and supplier data private and secure? Io-Tahoe's automated sensitive data discovery is pre-programmed with over 300 existing policies that meet government-mandated risk and compliance standards. These automate the process of applying policies and controls to your data. Our algorithm-driven recommendation engine alerts you to risk exposure at the data level and suggests the appropriate next steps to remain compliant and ensure sensitive data is secure. Unsure about where your organization stands in terms of digital resilience?
Sign up for our minimal-cost, commitment-free data health check. Let us run our sensitive data discovery on key unmapped data silos and sources to give you a clear understanding of what's in your environment. Book time with an Io-Tahoe engineer now. Okay, let's now get into the next segment, where we'll explore data automation, but from the angle of digital resilience within an as-a-service consumption model. We're now joined by Yusef Khan, who heads data services for Io-Tahoe, and Suresh Kanniappan, who's the vice president and head of U.S. sales at Happiest Minds. Gents, welcome to the program. Great to have you on theCUBE. Thank you, David. Suresh, you guys at Happiest Minds talk about this notion of born digital, born agile. I like that, but talk about your mission at the company. Sure. Founded in 2011, Happiest Minds is a born digital, born agile company. The reason is that we are focused on customers. Our customer-centric approach and delivering digital and seamless solutions have helped us be in the race along with the tier one providers. Our mission, happiest people, happiest customers, is focused on enabling customer happiness through people happiness. We have been ranked among the top 25 IT services companies in the Great Places to Work survey. Our Glassdoor rating of 4.1 out of five is among the top in the Indian IT services companies. That shows the mission and the culture we have built on our values, right? Sharing, mindfulness, integrity, learning and social responsibility are the core values of our company, and that's what the entire culture of the company has been built on. That's great. It sounds like a happy place to be. Now, Yusef, you head up data services for Io-Tahoe. We've talked in the past; of course, you're out of London. What's your day-to-day focus with customers and partners? What are you focused on?
Well, David, my team works daily with customers and partners to help them better understand their data, improve their data quality and their data governance, and help them make that data more accessible, in a self-service kind of way, to the stakeholders within those businesses. And this is all a key part of digital resilience, which we'll come on to talk about a bit later. You're right. I mean, that self-service theme is something that we're going to really accelerate this decade, Yusef. But I wonder, before we get into that, maybe you could talk about the nature of the partnership with Happiest Minds. You know, why do you guys choose to work closely together? Very good question. We see Io-Tahoe and Happiest Minds as a great mutual fit. As Suresh has said, Happiest Minds is a very agile organization; I think that's one of the key things that attracts their customers. And Io-Tahoe is all about automation. We're using machine learning algorithms to make data discovery, data cataloging and understanding data redundancy much easier, and we're enabling customers and partners to do it much more quickly. So when you combine our emphasis on automation with the emphasis on agility that Happiest Minds have, that's a really nice combination. It works very well together, very powerful. I think the other things that are key are that both businesses, as Suresh has said, are really innovative, digital-native-type companies, very focused on newer technologies, the cloud, et cetera. And then finally, I think they're both challenger brands, and Happiest Minds have a really positive, fresh, ethical approach to people and customers that really resonates with us at Io-Tahoe too. That's great, thank you for that. Suresh, let's get into the whole notion of digital resilience. I want to sort of set it up with what I see, and maybe you can comment.
Prior to the pandemic, a lot of customers kind of equated disaster recovery with their business continuance or business resilience strategy, and that's changed almost overnight. How have you seen your clients respond to what I sometimes call the forced march to become a digital business? And maybe you could talk about some of the challenges that they've faced along the way. Absolutely. So especially during these pandemic times, you see, Dave, customers have been having a tough time managing their business. Happiest Minds, being a digitally resilient company, was able to react much faster than the other services companies in the industry. So one of the key things is organizations trying to adopt digital technologies, right? There has been a lot of data which has had to be managed by these customers, and there have been a lot of threats and risks which have had to be managed by the CIOs. So with Happiest Minds' digitally resilient technology, where we bring in data compliance as a service, we were able to manage resilience much ahead of other competitors in the market. We were able to bring in our business continuity processes from day one, where we were able to deliver our services without any interruption to the services we were delivering to our customers. So that is where digital resilience, with business continuity processes enabled, was very helpful in enabling our customers to continue their business without any interruptions during the pandemic. So, I mean, some of the challenges that customers tell me about: obviously they had to figure out how to get laptops to remote workers, that whole remote work-from-home pivot, and figure out how to secure the endpoints, and, looking back, those were kind of table stakes. But it sounds like you go further. I mean, digital business means a data business, putting data at the core, I like to say.
So I wonder if you could talk a little bit more about maybe the philosophy you have toward digital resilience and the specific approach you take with clients. Absolutely, Dave. See, in any organization, data becomes the key, and so the first step is to identify the critical data, right? This is a six-step process we follow at Happiest Minds. First of all, we take stock of the current state. Though customers think that they have clear visibility of their data, we do more of an assessment from an external point of view and see how critical their data is. Second, we help the customers strategize, right? The most important thing is to identify the most critical assets, data being the most critical asset for any organization; identification of the data is key for the customers. Third, we help in building a viable operating model to ensure these identified critical assets are secured and monitored duly, so that they are consumable as well as protected from external threats. Then, as a fourth step, we bring in awareness: we train people at all levels in the organization, which is key for people to understand the importance of the digital assets. As a fifth step, we work on a backup plan, in terms of bringing in a very comprehensive and holistic testing approach on people, process, as well as technology, to see how the organization can withstand a crisis. And finally, we do continuous governance of this data, which is key, right? It is not just a one-step process. We set up the environment, we do the initial analysis, set up the strategy, and continuously govern the data to ensure that it is not only managed well and secured, but also meets the compliance requirements of the organization, right? That is where we help organizations secure their data and meet the regulations, as per the privacy laws. So this is a constant process.
It's not a one-time effort. It's a constant process, because every organization goes on the digital journey, and they have to face all of this as part of the evolving environment on that journey. And that's where they should be kept ready, in terms of recovering, rebounding and moving forward if things go wrong. So let's stick on that for a minute, and then I want to bring Yusef into the conversation. So you mentioned compliance and governance. When you're a digital business here, as I say, you're a data business, and that brings up issues: data sovereignty, governance, compliance, things like the right to be forgotten, data privacy, so many things. These were often kind of afterthoughts for businesses, bolted on, if you will. I know a lot of executives are very much concerned that these be built in, and it's not a one-shot deal. So do you have solutions around compliance and governance? Can you deliver that as a service? Maybe you could talk about some of the specifics there. So we offer multiple services to our customers around digital resilience, and one of the key services is data compliance as a service. Here, we help organizations map their key data against the data compliance requirements. Some of the features include continuous discovery of data, right? Because organizations keep adding data as they become more digital. And we help in understanding the actual data, in terms of the residency of the data. It could be in heterogeneous data sources; it could be in databases, or in data lakes, or it could even be on-premise or in the cloud environment. So identifying the data across the various heterogeneous environments is a very key feature of our solution. Once we identify and classify the sensitive data, the data privacy regulations and the prevailing laws have to be mapped based on the business rules.
So we define those rules and help map that data, so that organizations know how critical their digital assets are. Then we work on continuous monitoring of the data for anomalies, because that's one of the key features of the solution, which needs to be implemented on a day-to-day operational basis. So we help in monitoring those anomalies in the data for data quality management on an ongoing basis. And finally, we also bring in automated data governance, where we can manage the sensitive data policies and their data relationships, in terms of mapping, and manage their business rules. And we drive remediations, and also suggest appropriate actions for the customers to take on those specific datasets. Great, thank you. Yusef, thanks for being patient. I want to bring Io-Tahoe into the discussion and understand where your customers and Happiest Minds can leverage your data automation capability that you and I have talked about in the past. I mean, it'd be great if you had an example as well, but maybe you could pick it up from there. Sure. I mean, at a high level, as Suresh has articulated really well, Io-Tahoe delivers business agility. That's by accelerating the time to operationalize data, automating, putting in place controls, and ultimately helping put in place digital resilience. If we step back a little bit in time, traditional resilience in relation to data often meant manually making multiple copies of the same data. So you'd have a DBA, they would copy the data to various different places, and then business users would access it in those functional silos. And of course, what happened was you ended up with lots of different copies of the same data around the enterprise. Very inefficient, and of course it ultimately increases your risk profile, your risk of a data breach; it's very hard to know where everything is. And I recognize that expression you used, David, the idea of the forced march to digital.
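Continuous monitoring of data for anomalies, as described above, can be as simple as flagging a metric value that deviates sharply from its own history. Here is a minimal sketch using a z-score check on daily row counts; the numbers are invented, and real data-quality monitoring would use much richer models:

```python
import statistics

def flag_anomalies(history, latest, z_threshold=3.0):
    """Flag a metric value that deviates strongly from its history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    # A value more than z_threshold standard deviations away is suspicious.
    return abs(latest - mean) / stdev > z_threshold

daily_row_counts = [1000, 1010, 990, 1005, 995]
print(flag_anomalies(daily_row_counts, 1002))  # False: within normal range
print(flag_anomalies(daily_row_counts, 400))   # True: likely an anomaly
```

Run day over day per dataset, a check like this catches broken pipelines and silent data loss before business users do, which is the ongoing data quality management Suresh is describing.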
So with enterprises that are on this forced march, what they're finding is they don't have a single version of the truth, and almost nobody has an accurate view of where their critical data is. Then you have containers, and containers enable a big leap forward. You can break applications down into microservices, updates are available via APIs, and so you don't have the same need to build and manage multiple copies of the data. You have an opportunity to just have a single version of the truth. Then your challenge is, how do you deal with these large legacy data estates that Suresh has been referring to, where you have to consolidate? And that's really where IO Tahoe comes in. We massively accelerate that process of putting this single version of the truth into place. By automatically discovering the data, discovering what's duplicate and what's redundant, you can consolidate it down to a single trusted version much more quickly. We've seen many customers who've tried to do this manually, and it's literally taken years using manual methods to cover even a small percentage of their IT estate. With IO Tahoe you can do it very quickly, with tangible results within weeks and months. And then you can apply controls to the data based on context: who's the user, what's the content, what's the use case? Things like data quality validations or access permissions. And once you've done that, your applications and your enterprise are much more secure and much more resilient as a result. You've got to do these things whilst retaining agility, though. So coming full circle, this is where the partnership with Happiest Minds really comes in as well. You've got to be agile, you've got to have controls, and you've got to drive towards the business outcomes. And it's doing those three things together that really delivers for the customer. Thank you, Yusuf.
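The duplicate-discovery step described above — automatically spotting redundant copies across silos before consolidating to a single trusted version — can be illustrated with a minimal sketch. To be clear, this is not IO Tahoe's actual implementation; the record fields, source names and normalization rules below are made-up assumptions for illustration:

```python
import hashlib
from collections import defaultdict

def fingerprint(record: dict) -> str:
    """Hash a record's normalized content so identical data gets the
    same fingerprint regardless of which silo happens to hold it."""
    normalized = "|".join(f"{k}={str(v).strip().lower()}"
                          for k, v in sorted(record.items()) if k != "source")
    return hashlib.sha256(normalized.encode()).hexdigest()

def find_duplicates(records: list[dict]) -> dict[str, list[str]]:
    """Group record sources by content fingerprint; any group with more
    than one source is a candidate for consolidation."""
    groups = defaultdict(list)
    for rec in records:
        groups[fingerprint(rec)].append(rec["source"])
    return {h: srcs for h, srcs in groups.items() if len(srcs) > 1}

# Hypothetical copies of the same customer row living in two silos
records = [
    {"source": "crm_db",    "name": "Ada Lovelace", "email": "ada@example.com"},
    {"source": "data_lake", "name": "ada lovelace ", "email": "ADA@example.com"},
    {"source": "crm_db",    "name": "Alan Turing",  "email": "alan@example.com"},
]
for sources in find_duplicates(records).values():
    print(sorted(sources))  # ['crm_db', 'data_lake']
```

The point of the normalization step is that copies rarely match byte-for-byte; a real discovery engine would go much further (fuzzy matching, column-level profiling), but the grouping-by-fingerprint idea is the same.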
I mean, you and I in previous episodes have looked in detail at the business case. You were just talking about the manual labor involved; we know that doesn't scale. But there's also that compression of time to get to the next step in terms of ultimately getting to the outcome. And we've talked to a number of customers in theCUBE, and the conclusion is really consistent: if you can accelerate the time to value, that's the key driver, reducing complexity, automating and getting to insights faster. That's where you see telephone numbers in terms of business impact. So my question is, where should customers start? How can they take advantage of some of these opportunities that we've discussed today? Well, we've tried to make that easy for customers. With IO Tahoe and Happiest Minds you can very quickly do what we call a data health check. This is a two to three week process to really quickly start to understand and deliver value from your data. IO Tahoe deploys into the customer environment, the data doesn't go anywhere, we look at a few data sources and a sample of data, and we can very rapidly demonstrate how data discovery, data cataloging and understanding duplicate and redundant data can be done using machine learning, and how those problems can be solved. And what we tend to find is that we can very quickly, as I say in a matter of a few weeks, show a customer how they can get to a more resilient outcome, and then how they can scale that up, take it into production, really understand their data estate better and build resilience into the enterprise. Excellent, there you have it. We'll leave it right there, guys. Great conversation. Thanks so much for coming on the program. Best of luck to you and the partnership. Be well. Thank you, David. Suresh. Thank you, Lisa. And thank you for watching, everybody. This is Dave Vellante for theCUBE in our ongoing series on data automation with IO Tahoe. Digital resilience, automated.
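The data health check outlined above — sampling a few sources and classifying sensitive fields against pre-built policies — can be sketched at its simplest like this. The two policy patterns and the match threshold are illustrative assumptions of my own, not the product's actual rules, which number in the hundreds:

```python
import re

# Illustrative policies only — simplified stand-ins, not production rules.
POLICIES = {
    "EMAIL_ADDRESS": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US_SSN":        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_column(values: list[str], threshold: float = 0.6) -> list[str]:
    """Tag a column with every policy that matches most of its sample
    values; the threshold tolerates a few nulls or bad entries."""
    tags = []
    for name, pattern in POLICIES.items():
        hits = sum(1 for v in values if pattern.search(v))
        if values and hits / len(values) >= threshold:
            tags.append(name)
    return tags

# A small sample, the way a health check inspects a slice of each source
sample = ["jo@example.com", "sam@example.org", "n/a"]
print(classify_column(sample))  # ['EMAIL_ADDRESS']
```

Once columns carry tags like these, the privacy regulations and business rules discussed earlier can be mapped onto them, which is what turns discovery into governance.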
Compliance, privacy and security for your multi-cloud. Congratulations, you're on the journey. You have successfully transformed your organization by moving to a cloud-based platform to ensure business continuity in these challenging times. But as you scale your digital activities, there is an inevitable influx of users that outpaces traditional methods of cybersecurity, exposing your data to underlying threats and making your company susceptible to ever greater risk. To become digitally resilient, have you applied controls to your data continuously throughout the data lifecycle? What are you doing to keep your customer and supplier data private and secure? IO Tahoe's automated sensitive data discovery is pre-programmed with over 300 existing policies that meet government-mandated risk and compliance standards. These automate the process of applying policies and controls to your data. Our algorithm-driven recommendation engine alerts you to risk exposure at the data level and suggests the appropriate next steps to remain compliant and ensure sensitive data is secure. Unsure about where your organization stands in terms of digital resilience? Sign up for our minimal-cost, commitment-free data health check. Let us run our sensitive data discovery on key unmapped data silos and sources to give you a clear understanding of what's in your environment. Book time with an IO Tahoe engineer now. Okay, now we're going to go into the demo, and we want to get a better understanding of how you can leverage OpenShift and IO Tahoe to facilitate faster application deployment. Let me pass the mic to Sabita. Take it away. Thanks, Dave. Happy to be here again. Guys, as Dave mentioned, my name's Sabita Davis. I'm the enterprise account executive here at IO Tahoe. So today we just want to give you guys a general overview of how we're using OpenShift.
Yeah, hey, I'm Noah, IO Tahoe's data operations engineer working with OpenShift. I've been learning the ins and outs of OpenShift for the past few months, and I'm here to share what I've learned. Okay, so before we begin, I'm sure everybody wants to know: Noah, what are the benefits of using OpenShift? Well, there are five that I can think of: faster time to operation, simplicity, automation, control and digital resilience. Okay, that's really interesting, because those are the exact same benefits that we at IO Tahoe deliver to our customers. But let's start with faster time to operation. By running IO Tahoe on OpenShift, is it faster than, let's say, using Kubernetes on other platforms? Our objective at IO Tahoe is to be accessible across multiple cloud platforms, right? And by hosting our application in containers, we're able to achieve this. So to answer your question, it's faster to create end-user application images using the container tooling OpenShift layers on top of Kubernetes, as compared to plain Kubernetes with Docker, CRI-O or containerd. Okay, so we got a bit technical there. Can you explain that in a bit more detail? Yeah, there's a bit of vocabulary involved. Basically, containers are used in developing things like databases, web servers or applications such as IO Tahoe. What's great about containers is that they split the workloads, so developers can select their libraries without breaking anything, and sysadmins can update the hosts without interrupting the programmers. Now, OpenShift works hand in hand with Kubernetes to provide a way to build those containers for applications. Okay, got it. So basically containers make life easier for developers and system admins. So how does OpenShift differ from other platforms? Well, this kind of leads into the second benefit I want to talk about, which is simplicity.
Basically there are a lot of steps involved when using Kubernetes with Docker, but OpenShift simplifies this with its source-to-image process, which takes source code and turns it into a container image. But that's not all. OpenShift has a lot of automation features that simplify working with containers, an important one being its web console. Here I've set up a lightweight version of OpenShift called CodeReady Containers, and I was able to set up our application right from the web console. And I was able to set up this entire thing on Windows, Mac and Linux, so it's environment-agnostic in that sense. Okay, so I can see in the top left that this is a developer's view. What would a system admin's view look like? That's a good question. So here's the administrator view, and this kind of ties into the benefit of control. This view gives insights into each one of the applications and containers that are running, and you can make changes without affecting deployment. And you can also, within this view, set up each layer of security. There are multiple layers you can put up, but I haven't fully messed around with it, because with my luck, I'd probably lock myself out. So that seems pretty secure. Is there a single point of security, such as your user login, or are there multiple layers of security? Yeah, there are multiple layers of security. There's your user login, security groups and general role-based access controls. But there are also a ton of layers of security surrounding the containers themselves. For the sake of time, I won't get too far into it. Okay, so you mentioned simplicity and time to operation as being two of the benefits. You also briefly mentioned automation, and as you know, automation is the backbone of our platform here at IO Tahoe, so that certainly grabbed my attention. Can you go a bit more in depth in terms of automation? OpenShift provides extensive automation that speeds up that time to operation, right?
So the latest versions of OpenShift come with a built-in CRI-O container engine, which basically means you get to skip the container engine installation step, and you don't have to log in to each individual container host and configure networking, registry servers, storage, et cetera. So I'd say it automates the more boring, tedious processes. Okay, so I see the IO Tahoe template there. What does it allow me to do in terms of automation and application development? So we've created an OpenShift template which contains our application. This allows developers to instantly set up our product from that template. So, Noah, last question. Speaking of the vocabulary you mentioned earlier, digital resilience is a term we're hearing, especially in the banking and finance world. It seems, from what you've described, industries like banking and finance would be more resilient using OpenShift, correct? Yeah, in terms of digital resilience, OpenShift will give you better control over the resources each container is consuming. In addition, a benefit of containers is that, like I mentioned earlier, sysadmins can troubleshoot the servers without bringing down the application. And if the application does go down, it's easy to bring it back up using templates and the other automation features that OpenShift provides. Okay, thanks so much, Noah. Any final thoughts you want to share? Yeah, I just want to give a quick recap of the five benefits that you gain by using OpenShift. The five are faster time to operation, simplicity, automation, control and digital resilience. You can deploy applications faster, you can simplify the workload, you can automate a lot of the otherwise tedious processes, you can maintain full control over your workflow, and you can assert digital resilience within your environment. So guys, thanks for that. Appreciate the demo. I wonder, you guys have been talking about the combination of IO Tahoe and Red Hat.
Can you tie that in, Sabita, to digital resilience specifically? Yeah, sure, Dave. So why don't we speak to the benefits of security and control in terms of digital resilience. At IO Tahoe, we automate detection and apply controls at the data level, so this provides for more enhanced security. Okay, but if you were to try to do all these things manually, what does that do? How much time can I compress? What's the time to value? So with our latest versions of IO Tahoe, we're taking advantage of the faster deployment times associated with containerization and Kubernetes. This speeds up the time it takes for customers to start using our software, as they're able to quickly spin up IO Tahoe in their own on-premise environment, or otherwise in their own cloud environment, including AWS, Azure, Oracle, GCP and IBM Cloud. Our quick start templates allow flexibility in deploying to multi-cloud environments with just a few clicks. Okay, and I'll just quickly add: what we've done at IO Tahoe here is we've really moved our customers away from the whole idea of needing a team of engineers to apply controls to data, as compared to other manually driven workflows. With templates, automation, pre-built policies and data controls, one person can be fully operational within a few hours and achieve results straight out of the box on any cloud. Yeah, we've been talking about this theme of abstracting the complexity; that's really what we're seeing as a major trend in this coming decade. Okay, great. Thanks, Sabita, Noah. How can people get more information, or if they have any follow-up questions, where should they go? Yeah, sure, Dave. If you guys are interested in learning more, reach out to us at info@iotahoe.com to speak with one of our sales engineers. We'd love to hear from you, so book a meeting as soon as you can. All right, thanks guys. Keep it right there for more CUBE content with IO Tahoe.