Welcome to the next edition of the OpenShift Operator Hours show. Today we're lucky enough to have with us Trilio, and we are going to be talking with Murali Balcha, who's the founder and chief technology officer of the company, as well as Prashanto Kochavara, who's the director of product. So welcome, gentlemen, welcome to the show today.

Thank you, Michael.

A founder's story. I believe we were going to talk about a founder story. What can you tell us about that?

Thanks, Mike, for the introduction.

Hey, you know, last night Cyrus and I decided we were going to make a round of t-shirts for everybody. One of them is going to say, "Can you see my screen?" And the other shirt is going to say, "Hey, you're on mute." Everybody can be wearing these, and when someone hits one of those moments, we can all just start pointing at our shirts. You know, one of the best sellers. Yeah, so welcome, welcome, Murali. A founder's story. You founded Trilio. Tell us about that.

Yeah, yeah. So I founded Trilio in 2013. It almost feels like eons back. Before that, I was at EMC. I spent almost 15, 16 years at EMC. EMC, we know, is a pioneer in storage area networks; they single-handedly invented the storage array business. I was part of that journey. I was fortunate enough to get to work on almost every storage technology that is out there. When EMC bought VMware, I was leading the integration of virtualization with EMC technologies. Back in 2011, 2012, cloud was still at its inception, right? Amazon AWS was probably the only public cloud out there, and the shape and nature of the cloud was still up in the air. Every vendor was trying to figure out what they could do in the cloud and how they could play in the cloud. We were looking at the various technologies EMC had and how they would play in the cloud paradigm, and I was looking at data protection.
And then we suddenly realized that most of these products had been there since the 1980s, when tape was the predominant backup media, and they haven't really evolved much since then. They still have a client-server architecture, backing up lots of files, administered by one central administrator. So it wasn't really meant for cloud. Cloud is a different paradigm, and it was upending everything that we knew about IT. We thought we needed completely new thinking around data protection, and that reasoning led me to come and found Trilio in 2013.

And what's in the name? I mean, we have lots of partners; I have probably 1200 software vendors that I work with on a daily, weekly basis. I do marketing activities with our partners once they've certified their software, and we're going to get to that in a minute about your support for OpenShift. But names of bugs, names of rockets... so why Trilio?

Yeah, why Trilio. What I realized was that coming up with the name is one of the most challenging things. We tried various things, and then we realized, well, we have three founders. We kind of came together with some passion; it takes kind of a lion's heart to come and start a startup company. So we tried Three Lions, and it kind of rhymed into Trilio. So we went with that.

You know, it's even more than that. I mean, today's actually my 20th anniversary at Red Hat. I just found out; I had all these people messaging me on LinkedIn this morning and I couldn't figure out why. But having been here at Red Hat for 20 years now, which I learned today, I've dealt a lot with brand and marketing over the years around logos and new product creation and so forth.
And, you know, when you make a name, it has a big ripple effect on everything else from a brand perspective, from the naming of the products all the way down to trying to put a logo on a baseball cap. If your logo is unwieldy, it makes it really challenging to do branding with other companies. So I just thought I'd bring that up. Anyway, so founding the company was smooth sailing? You got a little bootstrap capital and then everything just went perfectly as planned, or what? Absolutely. No hiccups, nothing. It was a straight drag race.

No, it wasn't that. You know, every startup company has some challenges, hitting that mark with the business case, right? Some companies do much better; partly they are lucky and partly they probably read the market very well. But a lot of other companies have a winding path, some trial and error, until they get their act together, right? We knew we had a big, big opportunity here in the cloud; cloud was going to be the next biggest thing back in 2012, 2013, right? But the hype that we saw at the time, when we left EMC and started exploring and talking to a lot of companies, it didn't really match with the reality, right? Every company wants to do something in the cloud. They want to reorganize their IT into the cloud paradigm, but the reality is different, right? They still have the older business processes in place. They use the old technologies. So it was a bit of a surprise for us, the hype versus the reality, and for a few years it was a bit of a struggle to find those customers who were looking for a product in the backup space. And on top of it, backup is not a day-one problem, right? They need to first deploy the applications in production. They need to run in production for a few days before they realize they need a backup.
So it's a day-two problem. That also hampered our ability to succeed in the marketplace. It took almost three years for us to land our first customer.

And TrilioVault for data protection: it's cloud native, it's built on Kubernetes for Kubernetes. How is that? Why does the world need another data protection solution? I mean, I've been around, you know, as long as dirt, and back in the day when I was at Digital Equipment Corporation, there were great pieces of backup software, and we all know the names of them. How come people just don't use something that's been around and has been well manicured for the last 20 years? Why do they need something new?

Yeah, well, it's IT, right? It requires constant innovation. Most of the backup systems that were developed are very much detached from the platform they are protecting, right? That was okay in the Windows and Linux world, a bunch of Linux servers. But when you look at the cloud, they layered some kind of management layer on top of this farm of servers, right? They have the multi-tenancy, they have the self-service aspect of it, and the scale-out architecture of it. The old paradigm is that you have a backup administrator who knows every application out there and knows how to back up and restore. That doesn't really work with the cloud. Cloud is more about self-provisioning. If I want to provision 10 applications, I'll go and provision them, right? I'll manage everything. There is no central administrator who helps me provision the resources that I need for my application deployment. It just happens, right? And backup has to flow into the same paradigm, right? There is no central administrator. So we always put the platform first, right? We want to make sure that this functionality flows with the platform without adding additional friction to the deployment, the use, and the management, right?
It should look very native to the platform that you are working on. That's been our mantra, our foundational theme for the company, right? We started that with OpenStack. We were the first true backup-as-a-service, self-service and multi-tenant, for the OpenStack cloud. And then when we were looking at cloud native, at Kubernetes, we still wanted to stick to that same paradigm. We could have taken whatever we developed for OpenStack and somehow retrofitted it into the Kubernetes world, but we didn't do that, because, again, same thing, right? Platform first. If someone is using kubectl to manage their workloads, provisioning and managing them, they should still use kubectl to manage their backups. That is our theme. And that is the reason why we went and developed this from the ground up, right? It looks natively built and it provides all the ease of use for the end customer.

Okay. You know, you mentioned OpenStack; I just had flashbacks. I mean, like I said, I've been here for a little while. I remember when OpenStack was the shiny object and they started the, what was it called, the OpenStack Summits, and they were up in Portland. I remember going up to Portland, and the amount of hype and excitement and energy on those show floors when you were walking around there: everyone saying this is the next big thing for computing, this is the next paradigm shift of distributed computing. And then six years later, it seems it's kind of niched into the telco-type space. And then all of a sudden Kubernetes is here and it's the same thing. Now you go to KubeCon. Well, I guess we didn't get to go to the one in Amsterdam, right? But they were really ratcheting up for that one. So I don't think we're going to see Kubernetes going away anytime soon.
I think it's here to stay and is the key component for orchestration in a hybrid world. When you said you got your first customer, what was it that made them say, okay, we want to go with you? Like, why did they select Trilio? I mean, it's hard. I remember being at Red Hat when we first started; I started in 2001, I think. And going in there, you had to sell the customers not on your company or on your product; we had to sell open source, and that we didn't have blue hair and skateboards. We had a lot of rep turnover back then because we were trying to convince people that open source was a good model, as opposed to why they should be buying our product. So how was it when you got your first deal? What were the challenges that you had with them?

Right. That was a very interesting story. We were given the opportunity to present our solution to one of the biggest telecommunications companies in the US. There was a three-day bake-off there, and all the vendors were presenting. We were the last to go, on the third day, the last day, to present our solution. We didn't have any brand recognition, nothing, right? So we did go there, and we presented a natively built OpenStack solution that adhered to all the OpenStack principles: the scale, the multi-tenancy, integration with Keystone, and everything. And the guy stopped my presentation two slides into it and said, you're in.

What? He didn't even let you finish with death by PowerPoint?

No, no. He basically turned to his rep and said, this is exactly what I've been asking for. All the vendors were presenting how good they are with virtualization, and doing it for OpenStack is going to be nothing, right? They can do it. But no one presented the vision that we had.
And frankly, we didn't have a GA product at the time; we were at beta. But they were willing to take a risk with us because our vision aligned with what they wanted to do.

What's the saying, fake it till you make it?

I think... well, the vision is important, where we are taking the company, and that perfectly aligned with what they wanted to do with OpenStack, rolling out the cloud for the entire enterprise.

Okay. And I know that we are going to have a technical demonstration; Prashanto is going to be putting that on. Before we get into that, I did want to talk about your company and your operator for a second. We invest a lot of time and energy working with software companies like yourselves. You folks have a Red Hat certified container that's fully supportable for use with Red Hat products. Do you have an operator as well?

Yeah, our offering is fully certified and available in OperatorHub. And we've been doing it since day one, right, starting with the v1 product. We regularly update our operator in OperatorHub. The latest version that is available is 2.0.2, which was released last week.

Okay, cool. And if you're not there already, I'm also involved with the marketing for the Red Hat Marketplace operated by IBM. If you're not connected with them already, we'd love to have you talk with them about having your backup solutions available in the marketplace. I don't know if it's there yet or if you're having that conversation.

We know we have it in the IBM marketplace. Prashanto, are we in the Red Hat Marketplace? Yeah, we are in the Red Hat Marketplace, and the IBM marketplace might already have it. So we're going to go through all those pieces and make sure to be there.

Okay, great. So why should cloud architects, backup admins, and developers look at data protection? I mean, this is obviously a question that we have scripted here to get the dialogue going.
To me, it sounds pretty straightforward that of course cloud architects, backup admins, and developers should be concerned about Kubernetes, but in your own words?

Yeah, I think this is where we need to look at backup and recovery beyond just backing up a few files and, when it's needed, recovering those files. We want to elevate the conversation about backup to a bigger scale, especially in cloud native, Kubernetes being the cloud platform where you can write your application once and then run it anywhere, right? That paradigm. You can do that with DevOps. You can create your application, orchestrate your application, and then spin up as many instances as you want, in any cloud you want. But the challenge is the data. If you have a data-heavy application running on Kubernetes and, for various operational reasons, you want to migrate the application to a different cloud, right? Especially in a multi-cloud scenario, that will happen. Every business should be looking at how they can have a multi-cloud strategy. So the challenge is how you can move this application between clouds transparently. Backup can capture the application and then recover the application. So we want to elevate the discussion and make it almost a must-have functionality in a multi-cloud scenario. That way you can back up the applications and you can recover them on a different cloud, for DR purposes, for other purposes. You can migrate the applications for cost savings, right? For compliance reasons. So you can realize a lot more use cases if you make backup a central theme of your multi-cloud strategy, right? As you can see, it plays a much bigger role in your IT.

And I would like to throw out there that we are live right now. So, like, hello Facebook, hello YouTube, hello Twitch.
If you folks are watching on any of those live streams and you post any questions into the chat, our producers will pick them up and get them over here, and we'll be sure to address any and all questions that you folks may have. Prashanto, a little bit about yourself. You're the director of product. What does that mean? What do you do? How terrific are you? Were you the founder of Google before coming here to Trilio?

He's a rock star.

Murali's being kind. But yeah, a little bit about myself. I'm the director of product for the Kubernetes offering at Trilio, so I deal with all things Kubernetes in general: looking at it right from the business angle, looking at the market drivers, looking at requirement generation, helping with go-to-market strategies, engineering, architectural discussions, everything. My focus is on building, developing, and selling the product, and that's what my day-to-day job entails. Before this, I was at bigger corporations where I was running engineering as well as product management teams. When I learned about TrilioVault for Kubernetes and the venture that Murali was getting into, I was immediately attracted to it and definitely wanted to be part of it. After a few talking sessions, I was happy to be at Trilio and be driving this next-gen technology.

Okay. What about some war stories? We generally like to hear some aha moments or some type of interesting war story: deals lost, deals won, things that you want to share with our viewers. And that could be either from yourself or Murali. You happen to have the stage right now.

I would say, for aha moments: when we went down this path initially, how we built our product was more in terms of applications as they were being defined within Kubernetes. So people were running operator-based applications, Helm-based applications, and they were deploying applications based on labels and so on.
So our focus was on protecting a Helm chart, specifically the operator piece of an application as well as the application controlled by the operator, and so on. And what happened is, eventually, talking with customers, we realized there is a good population of customers who, because of the lack of a proper definition of an application within Kubernetes, were using namespaces as the boundary, as their definition or scope for an application. There was a one-to-one mapping between a namespace and an application. We immediately realized that, and one of the biggest value-adds we put into the product was a namespace-level backup and recovery feature, which is very well appreciated by customers today. Along with the Helm and operator pieces, they end up using the namespace backup and recovery procedure a lot as well.

So, Helm chart versus, say, a level-four operator. I know internally at Red Hat we have a large technical team that works with software vendors to run their operators through our certification test suite so they're Red Hat certified. I'm very familiar with the fact that building a Helm chart is easy but making an operator is more challenging. Can you talk about that?

Yeah, with an operator it's pretty much developing two applications together. You have the operator-based application and then the application that the operator is controlling and managing. The way we have looked at the landscape, we realized that in order to provide customers the best feel and approach for managing their Kubernetes environment, simplifying the Kubernetes infrastructure for them, operators were the best way forward for us.
So what we've done is we have adhered to and adopted the OLM framework for operators from Red Hat, and we support an operator for OpenShift directly through OLM; and then, via Helm, we have an upstream operator as well, which can be deployed in upstream environments.

Okay, maybe Murali, let's go back to you. What are the required principles behind a cloud native architecture for data protection and management?

I think software first. One of the principles is not to rely on any proprietary stuff, whether it's a backup format, because multi-cloud must be the main focus area. Essentially, what it means is: if your backup functionality is limiting your ability to be mobile with your applications, the ability to move these applications from one cloud to another, then you should stay away from that backup application. A lot of backup vendors go with a very proprietary format, because the storage savings they've created IP around only come when you save their backups on their appliance, but you can't move that appliance into the cloud. So we never, ever want to develop any proprietary technologies that prohibit our customers from being completely nimble in a multi-cloud environment. It's a software-only solution; it doesn't rely on any proprietary technologies, and it uses standard Linux-based formats and tools for creating backups, whether for space savings or for all the functionality that people expect from backup vendors. And on top of it, we are very nimble. You can take an application that is backed up from, say, upstream Kubernetes while you are running OpenShift on AWS, and you should be able to recover your application from our backups without losing anything. That is the flexibility that we provide. And of course, we talked about the native integration, being part of the platform itself.
That also gives us a lot more flexibility, because we can and do discover all the applications running in Kubernetes and provide a one-snapshot view to the end user: what applications are protected, what applications are not protected. It gives a kind of risk profile for the end user, whether there is exposure for the applications, because we have good integration with the platform. So we can provide that kind of analytics to the end user. You can see that in Prashanto's demo.

Okay, fair enough. So how's business?

Very good.

I know that when this whole challenging-time thing started, we were like, oh my gosh, I haven't been on an airplane in a year. Not that this is necessarily tied to our business, but I was wondering about our account managers, the lack of traveling on site, and whether that would slow down sales. But I think everyone's adapted to working virtually these days. Outside of just these challenging times, how is your business trending as far as adoption and growth and so forth?

Well, I think we've been doing very well. Of course, like any other company, we needed to adapt to change. I can speak to just our marketing: we had so many plans laid out for 2020, and then we had to change them to adapt to the COVID times. We did not miss a beat. I think we did very well in 2020 in terms of the number of new customers that we signed up, and the number of products and releases that we rolled out, even though everyone was working remotely.

That's good. Okay, so let's talk about the key use cases that customers are looking to solve, and then I think we'll switch and get into an actual hands-on technical demo. Can you talk about the customer use cases?

Yeah, we focus on four use cases with our solution. They look almost similar. First, of course, backup and recovery: if something bad happens, users should have a way to go in and recover.
And in a multi-cloud environment, our focus area will be multi-cloud, because that is going to be the standard. When you say multi-cloud, it doesn't have to be between AWS and Azure, or VMware and Azure. It could be multiple Kubernetes clusters running on-prem; that can be multi-cloud too, because they are two different platforms. And you need something that ties all these clusters together to provide a unified view and then enable these interesting use cases. So application mobility and migration is one big use case that we address. And the other one is disaster recovery: disaster recovery for the applications. If something bad happens on the production side, how quickly can you recover that application onto the remote side? So these are the four use cases that we focus on with our solution.

Okay. Prashanto, how about over to you? I think you prepared something for us today?

Yes. Yes, definitely. What I'll do is I'll share my screen, and you can let me know once you can see it.

Before you get started, what's the over-under, Murali? What's the over-under on whether his demo is going to hang or throw some bugs, which he fixes uncomfortably in front of everybody, or goes flawlessly because it's been pre-recorded?

I think it goes flawlessly.

I know, and I know technology, I've done this. 22 years ago, I was working for this company called Mission Critical Linux, before Red Hat bought us. And I was flying around the world with Intel, because we had a really interesting workload that ran on Itanium. And remember, the Itanium systems were these ginormous, you know, the predecessor to x86-64. And I was on stage, and this is the reason why I asked Prashanto: I was on stage with this fellow, Will Swope, I think was his name. And we were just about getting ready to do the demo.
There were like two big racks, and then there was a client that was going to run an app, and we were going to fail it over between the nodes. And it had a memory leak in it. And because I had staged it behind the scenes, while we were waiting for me to go on, the memory just started leaking, leaking, leaking, leaking. And so they announced me on stage, I go up there, and the client is just completely dead. It was the most embarrassing demo I've ever had to do in my entire life. And this was big, this was like LinuxWorld, right? It was like 15,000 people in the audience or something.

Wow.

So after that, I canned everything. I just... I don't think we have 15,000 people here today, but sorry.

A little less risk here on the OpenShift side. Live demos are always risky. Hopefully the demo gods are with me. So let's see how that goes. I'm assuming everyone can see my screen. You can see the OpenShift console?

Yes. And you see, this is exactly why we're having these t-shirts made, and we are going to get some shipped out to you. No joke, Cyrus and I were designing them last night. We're going to have 1200 made that say "Can you see my screen?" and 1200 made that say "You're on mute," and you guys can be our first recipients. You'll get one of each, and then you can wear them during all of your Zoom meetings and you can just point: hey, you're on mute, or, can you see my screen? And the same goes for anyone who's watching us right now on Facebook or YouTube. If anyone wants one of these limited edition "Can you see my screen" Red Hat t-shirts, send me an email. It's just wait at redhat.com and we'll hook you up. Over to you, Prashanto.

Great. Thank you, Michael. So what you see in front of us right now is the OpenShift console. This is an OpenShift 4.6 environment running in AWS. And what I'm going to do is look at the OperatorHub over here.
OperatorHub, as we all know, is like the marketplace for deploying applications into an OpenShift environment. Search for Trilio. We are available in the marketplace, and you see a couple of tiles. One of the tiles is the official 2.0.2 version, and the other tile is a custom version that I was using for testing. This is the custom version; it says 0.507, and this is the one which is actually installed. As you can see, from a capability perspective, we support full lifecycle: upgrades, patching, everything for the operator itself. It is a certified product. We provide additional information on how to learn more about the product; there are live labs and so on. From a licensing perspective, there is a 30-day free trial or a 10-node Basic Edition that we provide for free, and folks can get jump-started with the technology very, very quickly. So, overall, this takes about three clicks: you go into OperatorHub, search for Trilio, click on the tile, click on install. And within a matter of five minutes, you have Trilio up and running within your environment. Since it's already installed over here, we'll go and take a look at how it all appears.

Now, one of the things, when we were talking about the Helm and the operator pieces: we could have just developed everything as a Helm chart itself. But what we've done is we've followed an operator model, because we realized that we can have very much of a service-oriented architecture by going via the operator model. And what we've done over here is, if you see, each of these tiles represents a custom resource definition for Trilio. We have a custom resource definition for a license; for creating the backup repository, which we call the target; for creating policies, scheduling policies, retention policies; and for supporting hooks. Hooks are injection points within an application that you would run to quiesce the application and get it to a consistent state before taking a backup.
We have a CRD known as the backup plan, which is basically the protection scope that a user defines for what they want to protect. It can be just a Helm chart, it can be a Helm chart and an operator-based application, it can be a namespace, or it can be a combination of any of those items. Whatever the user wants to protect, they can define within this resource known as the backup plan. And then finally, we have the backup and the restore resources, which enable the backup and restore activities. So, overall, if you see, we've modularly built the product based on the operations and the activities that end users would want to do. Now, the value in doing it this way is the fact that, because each of these operations is a modular operation, you can apply Kubernetes RBAC, which we directly plug into, on each of these items. For example, Michael, let's assume you were the administrator and you had control over all the storage resources. You would not want any of the developers in the organization to create targets. So you can make sure that the target custom resource definition is available only to you for use. You can create the targets, but all the other folks within your organization can leverage that target for storing their backups. And the entire product, whether it's hooks, policies, backup plans, backups, restores, any of these activities can be granularly segmented based on a user's role and scope. Any questions on that, Michael? Based on how we've architected the product and how it looks and appears as an operator over here?

No, none at all. I think it looks really nice.

Awesome. So, right now, in this environment, I just applied a license here yesterday. It's the free license that we are using; it's all active. From a target perspective, we have two targets available over here.
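[Editor's note: for readers following along, the resource model described above can be sketched as plain Kubernetes manifests. The kinds below mirror the CRDs named in the demo (Target, BackupPlan, Backup), but the API group, version, and field names are illustrative assumptions rather than Trilio's exact schema; consult the product documentation for the real spec. The final Role uses standard Kubernetes RBAC to illustrate the point about restricting target creation to an administrator.]

```yaml
# Illustrative sketch only: apiVersion and spec fields are assumptions,
# not Trilio's exact schema. Kinds mirror the CRDs described in the demo.
apiVersion: triliovault.trilio.io/v1
kind: Target                        # backup repository, e.g. an S3 bucket
metadata:
  name: demo-s3-target
spec:
  type: ObjectStore
  objectStoreCredentials:
    bucketName: trilio-backups      # hypothetical bucket name
    region: us-east-1
    credentialSecret: s3-creds      # Secret holding access keys
---
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan                    # protection scope: namespace, Helm chart,
metadata:                           # operator, or a combination
  name: openshift-tv-backupplan
  namespace: openshift-tv
spec:
  backupConfig:
    target:
      name: demo-s3-target          # where the backups land
---
apiVersion: triliovault.trilio.io/v1
kind: Backup                        # one backup run against a plan
metadata:
  name: openshift-tv-backup
  namespace: openshift-tv
spec:
  type: Full
  backupPlan:
    name: openshift-tv-backupplan
---
# Standard Kubernetes RBAC: only holders of this Role may manage Targets,
# while developers can still be granted create on backups/restores.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: target-admin
  namespace: openshift-tv
rules:
- apiGroups: ["triliovault.trilio.io"]
  resources: ["targets"]
  verbs: ["create", "update", "delete"]
```

Because each operation is its own resource, the RBAC segmentation Prashanto describes falls out naturally: bind the Role above only to storage admins, and give everyone else read access to targets plus write access to backups and restores in their own namespaces.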
And similarly, from a policy and hook perspective, there are a few items created, and I'll go through them. But overall, the way a user would use these different tabs is by clicking on create target. They have the option of either using a YAML view, with the entire instructions on how to use the YAML in the right-hand side panel, or, if they do not want to use the YAML view, they can use a form-based approach, which OpenShift provides and we directly plug into. We have adhered to the framework for these dynamic forms, and because of that, our entire YAML specifications are available as a click-driven, simple form-based approach. If users are not familiar with the YAML approach, or don't like the YAML approach... I know developers would end up using the YAML approach, but in case you want to use the form view, we have that as well. Now, as I said, we have a few targets created over here. From a policy perspective, we have some policies created based on retention. We don't have any hooks. For the backup plans, as you see, there are a bunch of different backup plans running in this environment, associated with different protection scopes. And just to show you how that works: we'll just call this one OpenShift TV backup, for the namespace that we had on the screen. And from a backup config perspective, we're going to use one of the targets that I already have created. Once you provide all this information, you go ahead and click create. This will start creating the backup plan, and once the controllers validate that whatever I've asked it to back up is actually correct, it will be put into an available state. Beyond that, a user would come into the backup menu and create a backup. Let's call this one OpenShift TV. I'm not giving any labels over here. Backup type: we'll do a full backup.
And the backup plan name will be OpenShift TV. There is other information around API version and so on; you can provide that if you want, but it's not necessary. And we can go ahead and create this. So what happens is once we trigger this backup, which is right here, OpenShift TV backup, it will go through the Trilio process of, you know, snapping the metadata objects, or all the manifest files that comprise the application. We protect that first. And then we get into the data components of capturing the persistent volumes and so on. So it's pretty much a sequential, step-by-step process, and I expect this operation to take about five to six minutes overall. But in the end, it would depend upon, you know, how big your persistent volumes are whenever you're doing a backup. But that's overall, you know, how you end up doing a backup from OpenShift for any of your applications running anywhere within the cluster. So I have a couple of questions if I may. I didn't want to interrupt you right in the middle, but you know, you referenced at the top that this is for Kubernetes. So does this work with any flavor of Kubernetes? Like, do you see people using community-supported Kubernetes inside their environments and they're asking you to back up? And given the fast change rate in Kubernetes development by itself, how does your product keep up with all the changes that are coming along? So, yes. To answer the first part of your question, we have developed the product as a Kubernetes-native solution. So it will work in any distribution that is CNCF certified. So that's how we've constructed the product, making it, you know, very seamless for any Kubernetes environment to deploy and use this. And then because we are, you know, leveraging Kubernetes constructs, you know, we are leveraging all the abstractions that Kubernetes provides.
Maintenance and upkeep becomes pretty easy for us because, you know, we are using some of the more basic, foundational components that Kubernetes offers. So those components do not change much, if at all. And then there is backwards compatibility provided through the Kubernetes system itself that makes the churn on our end very minimal. So from that angle, you know, all the abstractions to hook into the Kubernetes system, the, you know, compatibility, all these different pieces help us in, you know, pushing out code really, really fast. So one of the things that we've been doing on average is new releases, you know, with cutting-edge features, almost about every couple of months. And I've done things in the past, you know, not Kubernetes-centric, and I've seen how long those release cycles can be. But Kubernetes and all these different newer technologies are changing the way we do things. And while we are building a product for Kubernetes, we ourselves are realizing the benefits that it provides internally for us. Okay. And what about workloads? Are there some apps that TrilioVault works better with because of the way they're architected? Or what if someone's using an app that's not cloud native and they sort of did a forklift, you know, jammed it into the cloud? Does that pose any problems? Does your backup solution work best, or work only, for an app that's cloud native? So right now, if your application is a Kubernetes-based application, we can protect it via the Kubernetes solution. Obviously, if you have, you know, more of a traditional VM-based application, then we have our other products around RHV and OpenStack as well that can support that. But each product is specific for those domains. And, you know, future consolidation is something that we have thought of, but it's kind of down the road. Okay.
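The backup plan and backup creation walked through in the demo can be sketched as Kubernetes custom resources. This is an illustrative sketch only: the API group, kinds, and field names below are assumptions based on the resources named in the conversation (backup plan, target, full backup), not a confirmed TrilioVault schema — use the YAML instructions in the product's right-hand side panel for the exact spec.

```yaml
# Hypothetical BackupPlan: protects a helm release using an admin-owned Target.
# All field names here are illustrative assumptions for demonstration.
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: openshift-tv-backupplan
  namespace: demo-app              # the namespace being protected (assumed name)
spec:
  backupConfig:
    target:
      name: my-s3-target           # a pre-created Target resource (assumed name)
  backupPlanComponents:
    helmReleases:
      - demo-app-release           # protect the whole helm release as one unit
---
# Hypothetical Backup: triggers a full backup against the plan above,
# mirroring the "full backup" selection made in the demo form.
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: openshift-tv-backup
  namespace: demo-app
spec:
  type: Full
  backupPlan:
    name: openshift-tv-backupplan
```

Because the whole flow is expressed as custom resources like these, the same operation works identically from the form view, the YAML view, or plain `kubectl apply` on any CNCF-certified distribution.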
And what about, so I was just on a call before we were with you, we were talking with one of the modern cloud-native SQL database vendors out there, and they're, you know, running in a very distributed-node manner. What's the magic? How does it work, in plain people speak maybe, for doing, you know, backup or data management when you have these nodes that are distributed? You know, say you take an OpenShift cluster that runs on public cloud and private cloud and on premise. Is there any special sauce or magic that is required to have TrilioVault be able to address the data management needs in such a distributed model? Exactly. So the secret sauce, or the way we do it, is through hooks. You know, we can basically inject commands before a backup happens and, you know, after a backup happens, directly within the container itself. So if there are any kind of management activities or prep work required for us to perform the backup, or for the application to get a consistent backup, we end up doing those through hooks. And what we've also done is, if I take you to our... So if I can interrupt, Prashanta. So the other aspect to that, Mike, is obviously application discovery, right? When you look at this complex application where you have multiple nodes, some of them local, some of them remote, the thing that is tying the entire application together is how the application is laid out. It could be an operator-based application or it could be a helm-based application, right? So, you know, you need to have the backup policy operate at the abstraction that the developer intended. Like, if they're rolling out a helm-based release, the backup needs to run at that level, not at the level of nodes or PVs or containers, but at the release or the operator.
So I think we are the only backup solution out there that can operate at that operator or helm level, where we auto-discover all the components of the application, including the PVs and the containers and secrets and the config. And we can back up the application as it is laid out. And I'll say, if the application has basically grown to a larger number of PVs, we automatically discover that, and the next backup will include all those things. We preserve that application nature. And the hooks essentially provide the application-consistent backups for you. Do you need to sign off on that? Yes, yes, only as the higher authority on this. But what I wanted to show you was, you know, we have tested, validated, and we do support, you know, multiple applications that have this distributed nature, like Cassandra and so on. We provide the backup plans and the hooks that we've tested with. You know, the hooks kind of show you what is the hook you want to execute, what is the pod you want to execute it against, and is there any kind of specific container you want to run this within. So, you know, you have a lot of granular control to make sure that you can talk to the application in the right fashion to get it into a state where you can perform the consistent backup that you need. Perfect. So in your hooks there, you specifically have hooks for Cassandra, hooks for Mongo, hooks for MariaDB. Do those come already, or do those have to be preconfigured? You know, do they have to be created by the end users when they first get this going? And the reason why I ask is I'm more concerned about transactional databases of a SQL nature, like a YugabyteDB or FaunaDB, I think they just added SQL functionality, or what's the other, like CockroachDB.
Those are the ones that I see are going to start displacing transactional databases traditionally from like an Oracle or a DB2. Correct. So, yes, one of the things is, we provide these hook examples as, you know, templates for customers to leverage within their own environment. Obviously, they can tweak them further to match exactly how they are running their applications. But, you know, following the kind of 80-20 principle, this would get them 80% of the way there. And if there are any additional nuances or patches required on top of that, they can obviously, you know, do it on top. Right now, we support, you know, a bunch of different applications. There are a few, not few, a lot more that are on the, you know, the bandwagon for things to complete. And you will see things like, you know, time-series databases like InfluxDB, relational databases, and so on being added here in the upcoming weeks as well. And there was a second part to your question, right, Michael? There was a second part, but I do want to make sure we're cognizant of the time here. So we've got about two minutes left to address the second part. Okay, so the second part of your question was around the templates themselves, right? You were talking about these examples, right? And, yeah, so as I said, right, these are different application-based hooks that we are providing. They're constantly being updated, you know, we test them out internally. They are provided as a, you know, template for everyone to use. And we continue building these over and over again. And whatever new applications we see demand for from customers, they will be added here. That's, you know, a proof point that this works and we've tested it well. The reason why I'm bringing this up is because I talk with customers all the time. And this morning I was on with our entire financial services sales leadership team.
And they identified one of their key workloads as being these transactional SQL databases for DB2 and Oracle offload. So you might want to take a look at adding some support in there for those right out of the box. Definitely, definitely. So we have a couple of minutes left here. I wanted to get back out and talk with Morali about, you know, where do you see this going in 18 to 24 months from now? Are we there? I mean, has the eagle landed, or is this just the beginning? No, this is just the beginning. There is so much that we need to do here, right? You know, you can see that Kubernetes is going to be pretty much the standard in a cloud environment as the platform of choice. And people are also calling it a platform of platforms, right? Now you can see Red Hat is bringing virtualization technology to Kubernetes with KubeVirt, and we want to extend our support to KubeVirt. You know, and then there are other distributions of the platform that we need to qualify, including VMware Tanzu and anything that is happening in the public cloud, right? So this is a big play. Kubernetes is enormous, and we want to be there in every part of it. And also, you know, that's on the platform side, but also the features. We need to keep up with the features, and ransomware is one of the biggest threats. And we strongly believe that backups play a vital role in protecting against ransomware threats. So we are closely looking at some of the standards, like the NIST Cybersecurity Framework, and how we can be part of that bigger picture, and looking at additional use cases in multi-cloud environments and then also public cloud. So no, the eagle hasn't landed. It hasn't. So I think we are so excited about what the future holds for us for this year and next year. Okay.
Anything that, you know, when we're done here and you get that call from Justin, who's your VP of, you know, BizDev and marketing, you know, how do we stop that call from him that says, Morali, I can't believe you didn't talk about this one thing? What would that be? He wanted me to talk about a lot of things. We covered everything pretty well. I think, yeah, I think we are good. What did I not talk about? Do you want to talk about the management console that we built, something around that? Management console, we have a management console. We have a lot of good partnerships going on with a lot of backup vendors and other people. Like, you know, we recently announced a partnership with Veritas, which is big for us. It validates our vision in the cloud marketplace. That is one thing that Justin wanted me to talk about. Yeah, that's pretty much it, Mike. I'm going to now go to my clunky slideshow. So can you see my screen? I hope you can. This is the call to action part of it. If you want to see it in action, you can follow the link on here. They have demos, they have labs, and certainly if anyone needs to get in contact with them, you can see Prashanto's contact information on the bottom left, or you can always send me an email. I know these folks really well. And frankly, I know just about every software company fairly well. So it's just wait at redhat.com. Morali, Prashanto, thank you for joining us here today. This was great having you folks on the show. Thank you for having us, Mike and Chris.