Our guest today is Prashanto Kachavara, who is going to talk about Trilio Vault for Kubernetes, and Trilio is a new OpenShift Commons member. The way this works is Prashanto is going to give a presentation, do a bit of a demo as well, and then we'll have live Q&A at the end. So, without any further ado, Prashanto, please introduce yourself and tell us all about what Trilio is doing and what you're doing with Trilio Vault. Great. Thank you, Diane. Firstly, I'm very excited to be here at the OpenShift Commons briefing. My name is Prashanto Kachavara. I'm the Director of Product for Kubernetes at Trilio, and today I'm going to be presenting Trilio Vault for Kubernetes. I'm going to be talking about the product, the architecture, functionality, use cases, and compatibility, and following up with a recorded demo to give you an idea of how the product works. So the agenda for today: I'm going to talk a bit about the company, Trilio, who we are and what we have been doing. Then we'll dive into the customer problem, talking about legacy data protection solutions versus cloud-native data protection solutions and why the latter is needed more today. Next, we will give an overview of the product that we have built, which is Trilio Vault for Kubernetes, and then follow up with the technical details of the product. After that, we will dive into the demo to put the rubber to the road and see how everything works. And finally, I will provide a summary of what we learned in this session. So jumping right into the company overview: Trilio was founded in 2013. We are the leading data protection solution for OpenStack and Red Hat Virtualization environments. We have enterprise customers located all over the globe, along with our offices as well. We are backed by leading tech luminaries and venture capitalists, and we have been in this business for the past seven years.
So we have been doing data protection for a while, and we have come up with a lot of innovative technologies that we have patented as well. From a partner ecosystem perspective, we are a Red Hat and an IBM partner. We've been partners for a very long time and have been doing business together for a very long time, and as mentioned here on the slide, we are a Red Hat certified partner as well. From a platform perspective, we have products to do data protection of OpenStack, as I mentioned, and products to do data protection of Red Hat Virtualization, and now we are pivoting to address the data protection needs of cloud-native environments. Now let's talk about the customer challenge, and to do that, I'm going to illustrate it via applications and how they have evolved over time. Back in the day, we had bare-metal servers with one operating system and applications running on those operating systems. These applications were tightly integrated; that means all the components of an application were running within the same domain. There were dependencies of the components on each other, and that's why it was difficult to decouple them or run them outside the boundaries of an operating system. The operating system itself provided a lot of base layers that were needed and which were taken for granted by the application. Then we moved into the virtualization world. When we moved into virtualization, we did not change the architecture of an application. Applications still worked the same way: they were tightly integrated, with all the components running within the operating system boundary. What changed was the efficiency of the underlying hardware. So virtualization did not change the architecture; it brought efficiency to the overall resources within your data center. Now, when we get into a cloud-native world, what we see is that an application is broken down into multiple microservices.
All these microservices are independent components that run within an environment and have their own identity. In order to protect an application, all these individual components need to be protected together. They need to be backed up, not only the data portion, but also the metadata portion. So the architecture has changed, and that's why a different kind of solution is needed to protect a cloud-native application. Let's look at why traditional data protection solutions cannot match up to the needs or requirements of cloud-native applications. Firstly, traditional data protection solutions and cloud-native applications are disparate technologies. You need a cloud-native solution to provide data protection for cloud-native applications running in your environment. For example, you cannot run, or it would be really costly to run, a VM-based data protection technology to protect a complete cloud-native environment. You would have to keep virtualization or VMs running just to protect your cloud-native workloads, and it wouldn't be the right fit. Traditional data protection technologies also carry the application dependencies that we spoke about. They are siloed and focused on monolithic applications, applications that do change footprint, but very rarely. They are generally focused on the storage or the data volumes, and they do not focus on the metadata layer, the topologies, and so on. And from a role or persona perspective, these traditional solutions are focused more towards infrastructure admins than developers or DevOps admins. When we look at cloud-native applications, as we mentioned, they are modular and microservices-oriented. Each has its own identity. They are highly available, highly scalable, and change their footprint on demand. They are built with new languages and frameworks, and they are highly automated and API-driven.
If we think about all those individual components and had to manage them individually, it would be a management nightmare. That's why there is a lot of automation, and there are policy-driven ideas, managing a cloud-native landscape. Now, let's talk about Trilio Vault for Kubernetes. But before doing that, I'm going to talk about the Trilio Vault DNA, or what Trilio provides in all of its data protection solutions. First, we believe in building our products agentless. We do not insert any agent within a virtual machine if you're protecting a virtual machine via our OpenStack product, and we do not have any sidecars to protect your pods and your workloads in an OpenShift or Kubernetes environment. We are completely multi-cluster and multi-tenant. We adhere to the RBAC principles of whatever platform we work with. We believe in the concept of self-service UI integration, so wherever we have an integration point with a Kubernetes distribution, we integrate into that as well. And we believe in being completely scalable, linear, and tending to infinite scale. We are non-destructive, so there is no disruption of your existing workflows or existing applications when you install or operate Trilio Vault in your environment. Then we follow an open, universal backup schema. What this open, universal backup schema does is not only avoid vendor lock-in, but it leaves you free to use your data after it's backed up for additional workflows. And the open, universal backup schema that we use, which we'll talk more about in further slides, provides a lot of data efficiency features as well. You are able to eliminate a hardware deduplication appliance because of the innovative way data is copied to the target, thanks to the underlying format. Apart from that, there are additional features around granular file system recovery that can be achieved with this open, universal backup schema as well.
Now, let's talk about Trilio Vault for Kubernetes and the key attributes of this product. Going clockwise: firstly, we are application-centric, so we focus on the application layer. We protect not only the data volumes, but also the metadata of all the Kubernetes objects that an application comprises. We are native to Kubernetes and OpenShift. We are built on the Kube API server, so you do not need any other CLI or API to manage the Kubernetes environment or the Trilio Vault environment within Kubernetes. We are deployed as custom resource definitions that you can use to manage the overall product. From an application deployment and ecosystem tooling perspective, Trilio Vault can protect your applications whether they are deployed via Helm, via operators, or by labels, that is, with custom label tags on them. We also integrate into Prometheus for monitoring, we have dashboards available in Grafana, and our logs are integrated into Fluentd as well. Jumping to the left side, infrastructure compatibility: we leverage the CSI mechanism to talk to storage, so as long as the storage has a CSI driver available, we will automatically support that storage platform. For the target location where your backups are going to be stored, we support NFS and S3 as the underlying protocols. S3 can be on-prem, S3-based storage, or it can even be Amazon S3 as the target. More importantly, we are a certified technology. We are a certified operator within OpenShift, and we are certified with IBM Cloud Pak for Data and MCM, or Multi-Cloud Management, as well. Now, with all this, we want to make it easy for our customers to understand the product, test drive it, and then run it in their own environments as well. So what we've done is provide live environments directly through our website where you can come and play with Trilio Vault for Kubernetes.
Also, from a licensing perspective, we have free and basic licenses available. The free license gives you an unlimited number of nodes for 30 days. The basic license gives you up to 10 nodes for an unlimited time period. And then you also have the enterprise license with premium support, which gives you the full plethora of innovation and features that Trilio Vault for Kubernetes provides. I spoke about Trilio Vault for Kubernetes being Red Hat operator certified, so we are found directly within OperatorHub. If you go into OperatorHub today and search for Trilio, we will be listed as an operator-based application that you can install within your OpenShift environment. And after you install, this is how it will look. As part of the demo, we will touch a bit more upon the look and feel of the product. Next, I'm going to talk about packaging and accessibility. We have packaged our product as an operator-based application. Within an upstream environment, you can use Helm version 2 or version 3 to deploy the Trilio Vault operator, and then we have a single CRD that will be used to deploy the application and use it after that. On the OpenShift side, we have an OLM-based operator as well, built on a UBI image. What this operator does is deploy all the CRDs to manage the product. We are directly embedded within OperatorHub, as I already mentioned, and we will be available on operatorhub.io as well. Next, I'm going to quickly talk about the overall architecture of the product and how Trilio Vault for Kubernetes has been constructed. The very first layer is the user interaction layer. This is where the user creates their custom resources for backup plans, targets, backups, and restores.
Within the next layer, the control plane layer, we have the controllers for all our custom resource definitions, which monitor the custom resources and, if there are any changes to them, apply those changes via the Kube API server. What the control plane does is help capture the metadata from the application and transfer that metadata to the target location, which could be an NFS or an S3 repository, as I mentioned. And then finally, we have the data plane. The data plane is where the actual copy transfer of the persistent volumes happens. We do full backups, we do incremental backups, and, as I mentioned, we keep it in an open format, the QCOW2 format, on the backup target as well. Next, I'm going to talk about the custom resource definitions and how the user operations generally flow. The first one is the target. As mentioned, the user would first create a target, an S3- or NFS-based storage location where the backups will be placed. After that, you have the policy. A policy can be a scheduling policy or a retention policy, which says how often to take the backup and how many backups to keep as per your compliance needs. And then we also have the concept of hooks. If you have stateful workloads like databases, you can quiesce and unquiesce the database using the pre and post hooks. All three of these custom resources are referenced within the backup plan. The backup plan is the overall definition of what you're backing up, where you're backing up to, how often you're backing up, and whether there are any hooks that need to be injected as part of the application or applications you're backing up. The thing to note here is that a backup plan can cover a single Helm app, a single operator app, or a single label-based app, or multiple Helm, label, or operator apps, or any combination of Helm, label, and operator apps.
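To make the flow above concrete, here is an illustrative sketch of what a Target and a BackupPlan manifest might look like. Treat this as a shape, not a copy-paste definition: the API group, version, and field names here are assumptions for illustration, not the authoritative Trilio Vault schema, so check the product documentation for the real spec.

```yaml
# Illustrative only -- API group/version and field names are assumptions,
# not the authoritative Trilio Vault schema.
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-s3-target
spec:
  type: ObjectStore            # S3-compatible target; NFS is the other option
  objectStoreCredentials:
    bucketName: tvk-backups
    region: us-east-1
---
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: demo-backup-plan
  namespace: demo-app
spec:
  target:
    name: demo-s3-target       # where backups land
  backupConfig:
    schedulePolicy: daily-schedule   # how often to back up
    retentionPolicy: keep-last-7     # how many backups to keep
  backupPlanComponents:
    helmReleases:
      - demo-app               # a single Helm release; labels and
                               # operator apps can be listed here too
```

The hooks described above would be referenced from the plan in the same way, pairing a pre-hook (quiesce) and a post-hook (unquiesce) around the volume snapshot.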
So depending on what you want to protect, and it could be one Helm application talking to another Helm application which is talking to an operator-based application, you can define all of that within the backup plan. If you wanted to back up your namespace completely, you could create a backup plan which defines all the applications within it, and it would back up all of that together for you. The next custom resource we have is the backup custom resource, in which you specify whether your backup should be a full backup or an incremental backup. And obviously, if you have a schedule-based policy, it will use that schedule to take the backup periodically as well. Once you've taken your backup, the next operation would be to restore. Your restores can happen based on the name of the backup, which would be the case if you're within the same cluster or same namespace, or you can restore by location: if you are migrating between clusters, you would point to the location where you want to pick the backup from. And once you apply the restore custom resource, you will get your application back, whether it's a Helm, label, or operator app, or a combination of all of those. Next, I'm going to talk about protection and recovery. As mentioned, we protect the metadata of the application as well as the persistent volumes, and we do that for Helm, operators, and labels. What this means is you can now restore your application from one namespace into another namespace within the same cluster, or from one cluster to another cluster in a completely different namespace. So not only does this enable pure data protection within your standalone cluster, it also enables use cases like disaster recovery and migration when you're going to clusters other than your source cluster. Now, let's talk about what Trilio Vault for Kubernetes backs up.
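The backup and restore resources just described can be sketched the same way. Again, this is a hedged illustration: the group, version, and field names are assumptions for the sake of the example, not the product's exact schema.

```yaml
# Illustrative only -- field names are assumptions, not the exact schema.
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: demo-backup-1
  namespace: demo-app
spec:
  type: Full                   # or Incremental
  backupPlan:
    name: demo-backup-plan     # references the plan, which in turn
                               # references the target, policy, and hooks
---
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: demo-restore-1
spec:
  source:
    type: Backup               # restore by name within the same cluster;
    backup:                    # when migrating between clusters, you would
      name: demo-backup-1      # instead point at the backup's location on
                               # the target
  restoreNamespace: demo-app-restored
```

Applying the Restore resource is the entire user-facing restore operation; the controllers do the rest.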
As mentioned, we back up labels: if you just have custom applications with a label tag on them, we can back those up. If your applications are based on Helm, we can back your Helm applications up. Or if they are operator-based applications, we can do that as well. When we do a label-based backup, we look at the spec of all the resources the application comprises and back up that spec portion. Whether it is the pods, PVs, config maps, or secrets, we capture all of that. For every PV that we find, we capture the data out of the PV and store it within our target location in the QCOW2 format as well. On the Helm side, we back up all revisions of your application, including the deployed revision of the release. Then we parse the chart, identify the persistent volumes, and back up those persistent volumes as well. The key point to remember here is that when we back up and restore a Helm-based application using Trilio Vault for Kubernetes, we maintain the application type. So, for example, a Helm application when restored is still a Helm application. You can still use your Helm upgrade and Helm rollback commands to manage that application after it has been restored. In the operator world, what we do is back up the resources of the operator as well as any custom resources created by the user for the operator. Then we parse any application resources that have been created and back up the application as well; this is the application that is managed by the operator. Again, as mentioned, when we back up and restore the operator, it is still an operator-based application, so you do not lose the consistency of the application tooling you had originally deployed it with. Next, I'm going to talk about the overall backup flow.
This is an animated slide, so it should help everyone understand the underlying workflow that we leverage to protect a particular application. First, we do a metadata backup. To do the metadata backup, the first thing we do is spin up a metamover pod. What this metamover pod does is capture all the metadata information, that is, the deployments, the services, the config maps, the secrets, and move it into the target location. Then we look at the data backup, the persistent volumes that we need to protect. As part of the data backup, we spin up datamover pods, one datamover pod for each persistent volume, so that we get parallelism in terms of the speed at which data is moved into the target location. We take a snapshot of the persistent volume, convert that snapshot into a new persistent volume, and then mount or attach it to the datamover pod so that the datamover pod can read from it, convert it to the QCOW2 format, and store it on the target location. We then detach the persistent volume from the datamover pod and delete it. We keep that first snapshot that was originally taken; it stays around for incremental backups, to compare against the next snapshot that will be taken for the incremental backup. Now, diving into the workflow for incremental backups: again, the first pod that is spun up is the metamover pod. The metamover pod will again capture all the metadata information and put it into the target location. Then we get into the data backup phase, and as part of the data backup, as mentioned, we will spin up one pod for each persistent volume. We will take a new snapshot, leveraging CSI, convert those snapshots to persistent volumes again, and attach them to the datamover pods. We will do a diff, a compare of these two persistent volumes, to get the incremental changes.
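The snapshot-then-clone step in this flow rests on the standard upstream CSI snapshot APIs rather than anything Trilio-specific, so it can be sketched with stock Kubernetes objects. The class names and sizes below are placeholders for whatever your CSI driver provides.

```yaml
# Standard CSI snapshot APIs (snapshot.storage.k8s.io); class names,
# PVC names, and sizes are placeholders for your environment.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-pv-snap-1
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: app-data   # the PVC being protected
---
# Convert the snapshot back into a volume the datamover pod can mount
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-pv-snap-1-clone
spec:
  storageClassName: csi-sc
  dataSource:
    name: app-pv-snap-1
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```

The clone PVC is what gets attached to the datamover pod for reading; after the QCOW2 conversion it is detached and deleted, exactly as described above.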
Then we do a qemu-img convert, which is the internal tooling we use for converting to that open format, QCOW2, and we store it on the target. On the target, it is stored as an overlay image. So, as mentioned before, no repeated blocks are copied over; only unique blocks are copied, and they point back to the base image, the full-image blocks that were captured as part of the full backup. It's extremely efficient in terms of how much storage it occupies. Once that is done, we detach and delete the PVs, and we delete the oldest snapshot. In this case, PV snap one will be deleted, and we'll keep snap two around for the next incremental backup. Next, we'll get into the restore procedure. Restore is basically the inverse of the backup operation. Assuming we are restoring one of the incremental backups we have taken, we first spawn the datamover pod, create a CSI volume, and do a restore of the PV. We attach that PV to the datamover pod and then do a qemu-img convert again onto the restored PV. The PVs are then detached from the datamover pod to complete the data portion of the operation. Next is the metadata restore, where we spin up the metamover pod. The application metadata is restored back into your namespace in the particular cluster, and then all the application specs and PVCs are pointed at the restored PVs. Everything is done via the CSI interface. So now let's talk about the use cases. Based on how Trilio Vault does its backup and restore, there is a plethora of use cases that are enabled. Basic backup and recovery: you can schedule on-demand jobs, and you can have full and incremental backups.
You can restore into a new cluster, or into a new namespace in an existing cluster, and you have full or selective app restore as well. If you are doing disaster recovery, you can use the same technology to take your data and recreate it on another cluster. Again, your clusters can be hosted on-prem or in the cloud; as long as it's a Kubernetes cluster, you will be able to do your disaster recovery. Application mobility: from a test/dev perspective, or from a CI/CD perspective, you can back up production environments, recreate them in your dev environments, and move them into test, all using Trilio Vault for Kubernetes, enabling a smooth CI/CD workflow from production to test/dev and back and forth. And, as mentioned, today is mostly a hybrid world, where people have on-prem deployments of their Kubernetes clusters and public cloud deployments across various distributions. In order to manage your costs or to avoid vendor lock-in, you can again use Trilio Vault for Kubernetes to take data from one environment and move it into another, maintain a solid TCO, and get more bang for your buck. Next, I'm going to talk about monitoring and logging. Trilio Vault is integrated completely into Prometheus and Grafana. All the metrics from our custom resources, whether targets, backups, backup plans, or restores, get sent to the Prometheus server; we have defined a bunch of metrics for them. We do the visualization through Grafana, and we are providing about 10 dashboards as part of the product, which will be useful to monitor each and every instance of an application or a backup within your Trilio Vault environment. So we provide about 10 dashboards, but users are free to create their own dashboards within their Grafana instance as well.
From a logging perspective, we integrate into Fluentd, so all your logs across all the namespaces where you use Trilio Vault for Kubernetes will be available within Fluentd, and you have a single source of truth for all your logging information. And from a monitoring perspective, you have Prometheus and Grafana as a single source of truth for your cluster. Talking a little bit about the metrics we expose: there are informational items that we provide around the backup plans, targets, and backups. We have information about the status, how much of a backup has completed, and the duration of the backup, to understand what the transfer speeds were. You also have how much backup storage a particular backup is using and how much free storage is available on a particular target. There is also information about restores: how long a restore took, and whether it has completed. And finally, we have health information about our controllers as well. So not only can you monitor your Trilio Vault for Kubernetes application itself, you can also manage and monitor the backups of your other applications within your Kubernetes clusters. Next, I'm going to talk about security and permissions. From a security perspective, Trilio Vault does not require or depend on any admin access or cluster-admin roles or privileges that generally exist within a Kubernetes cluster. We leverage the existing security context constraints that are provided by OpenShift. And from a service account perspective, we provide an additional layer of security by creating our service accounts at runtime, at execution time, so those service accounts cannot be used anywhere else. We have our own set of permissions and capabilities that we define as part of the product, and all these definitions are provided in our publicly hosted GitBook documentation, which is at docs.trilio.io.
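If you run the Prometheus Operator in your cluster, scraping metrics like these is typically wired up with a ServiceMonitor. The sketch below is an assumption about the setup, not taken from the product: in particular, the namespace, label selector, and port name are hypothetical placeholders for however the Trilio Vault metrics Service is actually labeled in your install.

```yaml
# Hypothetical sketch: assumes a metrics Service exists for Trilio Vault;
# the selector label and port name here are placeholders, not documented values.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: triliovault-metrics
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: triliovault         # placeholder label on the metrics Service
  namespaceSelector:
    any: true                  # scrape wherever the product runs
  endpoints:
    - port: metrics            # placeholder port name
      interval: 30s
```

Once scraped, the backup duration, storage usage, and controller health metrics described above become queryable in Prometheus and chartable in the bundled Grafana dashboards.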
We are also working on building a much more intensive and detailed product definition page which talks about all the security aspects of the product, in terms of how the application is built and how it is distributed. So all the intricate and useful details will be provided to our customers within our user documentation. Next, I'm going to move into a quick demo of a Helm app on OpenShift. I have a recorded demo of how we do a backup and restore of Helm applications on OpenShift. I'm going to show the CLI, the non-UI side of it, and I'll also show the OpenShift UI to show what the implications of the CLI operations were. As part of this environment, we are running OpenShift 4.2, we are using Helm version 3, and we are using the OpenShift operator for storage, with just a hostpath CSI driver. As part of this overall demo, we are going to deploy a simple demo app, back up that app via Trilio Vault for Kubernetes, and then restore it and make sure that whatever we backed up is still available and running. With that, I'm going to start the demo and speak over whatever is happening here. As you see, the first thing we do is deploy the application, which is a Helm app. Once we have deployed the application, we do a helm list to see that the application has been created properly. We can see the application pods that have been deployed as part of the Helm command, and we'll ensure that the application and the PVCs are all up as part of the deployment. And then finally, we'll ensure that the app is running; it's a simple Hello World app that has been deployed. Now this is the application that we are going to back up and restore using Trilio Vault for Kubernetes. The first thing to do is to create the target.
The target, as I mentioned, is where we store our backups, and in this scenario we have an S3 target in Amazon. Next, we ensure that the target has been created and is available. We then define the protection plan, the backup plan for the application, which says to back up the overall Helm application as defined in the plan. Next, we go into the OpenShift user interface and confirm that our backup plan has been created there successfully. In the OpenShift user interface, if you go into OperatorHub and look at the installed operators, in our case for this demo we are in the OpenShift marketplace, you can see Trilio Vault for Kubernetes has been installed, and now we have the backup plan which defines the Helm application. Moving back to the CLI, we trigger a backup of that backup plan, the protection template we had specified, and then do a get backup to see the status of the overall backup operation. You can also get the details of whatever is happening as part of the backup operation. To confirm this on the UI side, we go into the backup tab and see that a backup has been created per the operation we executed on the CLI. Going back to the terminal, we list the backup and make sure it is available. You can see that it is a full backup, with start and end times available for the backup as well. Then, we first ensure that the namespace we are going to restore this backup into does not have any workload running already; the restore namespace is empty, as you see. We then apply the restore and do a get on the restore to see that the restore has started, it is in progress, and the objects are being copied over. We can then move over to the OpenShift UI.
I can go into the operator instance that was installed from the OpenShift Marketplace, and we can look at the restores here. We can see that a restore has started and is in the validation step: we first validate that everything should be able to restore, and then we start the restore. Once the restore is in progress, you can describe the restore object to see what is happening as part of the overall process. And once the restore has completed, you can again do a helm list to see that your app has been restored successfully. As mentioned earlier, when we back up any application, whether it is a Helm application or an operator-based application, we maintain the deployment tooling consistency. Because it was a Helm application, you can still use helm list to list the restored app. We can also verify that all the pods have been restored correctly, along with all the other objects like deployments, replica sets, persistent volume claims, and the PVs themselves. The final test is to expose the newly restored app via a service and a route, and then we can ensure that the app has been successfully restored into the new namespace. We can confirm everything on the OpenShift UI as well. We can look at the restored namespace project to ensure that our application, or all of our application objects, exist there. We see our front-end app there, and we should be good to go. As part of the demo, I also want to walk you through a quick idea of how Trilio Vault can be managed via the UI. This is a 4.4 environment that I have in front of me. We have Trilio Vault for Kubernetes, and we can use the YAMLs within OpenShift to manage or create any custom resources. We also conform to the dynamic forms that are provided via OpenShift.
So if you do not like to use the CLI as much, you can use these dynamic forms, which we have been working on closely with Red Hat, to ensure that we provide the best experience to our end users in terms of the UI. Basically, the dynamic form is a translation of the YAML that you see here, so depending on your choice of tool, you can use either one and switch between them. Next, I'm going to talk about compatibility and support. Trilio Vault, as we mentioned, has been built from the ground up leveraging the Kube API server, and we integrate directly into the Kubernetes concepts and the overall architecture, so there is nothing new you need to learn or understand to manage or operate Trilio Vault for Kubernetes. And because we built it from the ground up aligned with the Kubernetes constructs, you can use us in any upstream Kubernetes environment that supports CSI. CSI snapshots were supported upstream, I believe, from 1.12 onwards, and from 1.12 to 1.16 the feature was in the alpha stage. So what we recommend for those environments is to use Trilio Vault for Kubernetes for test/dev purposes; once CSI snapshots moved into beta in 1.17, you can use it for production as well. From an OpenShift perspective, 4.1 to 4.3 started using Kubernetes 1.12 and higher, and in those environments the CSI snapshot feature is still in alpha, so we recommend using it for test/dev environments there and for production from 4.4 onwards. From a storage perspective, as I mentioned, we are completely agnostic to the underlying storage as long as you use a CSI driver to manage your storage environments. CSI is the de facto protocol that is going to be used moving forward for working with storage via a Kubernetes interface.
So we are aligning ourselves to keep all those pieces agnostic, so that we are able to move and maneuver with any storage platform. From a target perspective, in our demo I showed you that we had an Amazon S3 target, but you could have any S3-compatible storage, which could be hosted on-prem as well. We also support file-based storage via NFS, so if you have an NFS server on-prem, or even in your cloud environments, you can use that as your target for storing backups. On ecosystem tooling: I spoke about our dashboards, which are available via Grafana, and we have integrations with Prometheus and Fluentd for logging and monitoring. And because we are agnostic and have built from the ground up aligned with Kubernetes, you can run us in any cloud where you would want to deploy OpenShift, whether it's GCP, AWS, Azure, IBM, or an on-prem managed distribution. As long as it's Kubernetes, Trilio Vault for Kubernetes will work with it. Okay, so I'm going to get into the summary, just going over the points that we have discussed today. The first thing is, cloud-native applications are completely different from traditional applications. There are a lot of independent components that need to be managed and protected, along with the metadata and the data. And the traditional approaches that we were all used to before the cloud-native world do not satisfy the challenges or requirements of the cloud-native world properly. Trilio Vault for Kubernetes is purpose-built as an operator to protect cloud-native applications. As I mentioned, we can protect your Helm, operator, or label-based applications. We are Kubernetes-native, and we fit into the ecosystem via Prometheus, Grafana, and Fluentd.
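To make the target discussion concrete, here is a hedged sketch of what S3 and NFS targets could look like as custom resources. The kind, API group, and field names are assumptions for illustration; the product's actual schema may differ.

```yaml
# Hypothetical Target custom resources; API group and field names are assumed.
apiVersion: triliovault.trilio.io/v1   # assumed API group
kind: Target
metadata:
  name: s3-target
spec:
  type: ObjectStore
  objectStoreCredentials:
    url: https://s3.amazonaws.com      # any S3-compatible endpoint works, on-prem included
    bucketName: tvk-backups
    credentialSecret: s3-creds         # Secret holding the access/secret keys
---
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: nfs-target
spec:
  type: NFS
  nfsExport: nfs-server.example.com:/exports/tvk   # on-prem or cloud NFS share
```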
We provide cutting-edge features in terms of open backup formats, without the need to buy a dedicated data protection appliance for your target storage, along with a lot of fundamental security principles that have been applied within the product. As a result of these features, we enable a plethora of use cases with Trilio Vault for Kubernetes. I mentioned backup and restore from a point-in-time recovery perspective, to protect yourselves from threats like ransomware or data corruption in general. You can use it for disaster recovery, for moving from one cluster to another. You can use it for CI/CD pipelines, for test/dev, or for application migrations from one cloud environment to another, whether for security reasons or cost reasons. And a key point to remember here is that Trilio has been around for a long time. We are already the leader in data protection for OpenStack and Red Hat Virtualization environments. So a lot of the copy transfer protocols, the underlying way of how you take data from one environment and move it into another, have matured very well over time. We are very confident in our technology, and we are very proud of how we have built these copy transfer protocols, which provide a lot of savings to the end user. And with that, I would invite all of you to try out Trilio Vault for Kubernetes today. You can log into our website, which is trilio.io. You can watch a demo of the product, and you can test-drive the same demo that I was showing, running it in your free time directly from the Trilio website without having to create any infrastructure yourself. And then you can download a free trial or a basic edition. As mentioned, we want to provide developers and IT administrators an equal opportunity to use the product in their own scopes and in their own landscapes.
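All of these backup-and-restore use cases hang off a backup plan that selects an application by Helm release, operator, or labels. A hedged sketch of a label-based plan follows; the kind, API group, and field names are assumptions for illustration, not the documented schema.

```yaml
# Hypothetical BackupPlan; API group and field names are assumed.
apiVersion: triliovault.trilio.io/v1   # assumed API group
kind: BackupPlan
metadata:
  name: frontend-plan
  namespace: demo
spec:
  backupNamespace: demo
  target: s3-target                    # where the backup data and metadata land
  backupPlanComponents:
    custom:
      - matchLabels:
          app: frontend                # label-based application selection
```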
So we have a free edition, which provides unlimited nodes for 30 days, and a basic edition, which provides 10 nodes for an unlimited amount of time. And then finally, we have the enterprise license as well, which provides support for all the cutting-edge features that Trilio Vault for Kubernetes provides. So with that, thank you for giving me the opportunity to speak here and introduce the product. My email is prashanto.kachavara at trilio.io, and my Twitter handle is @kachavara. So if you have any questions, later on or even now, I can take them and answer them for you. Well, Prashanto, thank you. I knew that I had seen the OpenStack offering that you had, but I had not seen such an in-depth look at what you were doing with Kubernetes and OpenShift. I'm really overwhelmed with how awesome it is, and I feel like I'm gushing or something, but I kept thinking to myself, man, what didn't you think of? But you answered in the last slide one of the questions that came in from YouTube, about being able to do the workshop on their own. So I encourage everybody to go over to the Trilio site and take a look at their online lab as well. There are a couple of questions, and one of them is: can backups be leveraged with third-party tools? For example, can I security-scan backups? Correct. Because we keep our backups in an open format, which is the QCOW2 format, you can leverage them as you want to with external tools. You can scan them, connect to the S3 target, and do all those intricate post-backup operations as well. Because we keep it in an open format, you are able to do all of that. And there was one other question prior to that, which you may have answered, but I'll ask it: are there any predefined RTOs for the backups, RTOs being recovery time objectives?
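On the scheduling question, the cron-style policies that the answer describes effectively set your RPO (how much data you can afford to lose), while RTO depends on your own restore SLAs. A hedged sketch of such a policy, with the kind, API group, and field names assumed for illustration:

```yaml
# Hypothetical scheduling policy; API group and field names are assumed.
apiVersion: triliovault.trilio.io/v1   # assumed API group
kind: Policy
metadata:
  name: hourly-backups
spec:
  type: Schedule
  scheduleConfig:
    schedule:
      - "0 * * * *"    # standard cron: top of every hour, giving a worst-case RPO of ~1 hour
  retentionConfig:
    latest: 24         # keep the last 24 backups
```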
So we have basic scheduled cron jobs that you can define as part of the backup plan. The cron jobs would obviously maintain an RPO, not an RTO; RTO would be based more on how you, within your organization, define your SLA in terms of getting the application back up. But from an RPO perspective, that is what we provide through policy scheduling. Hey Prashanto, Chris Short here. I want to thank you, and then drive home the importance of staying Kubernetes-native, right? Like, if you stay in that lane, you open yourselves up to all the availability of the entire ecosystem. Can you talk a bit about the importance of staying cloud-native and Kubernetes-native to the Trilio platform? Definitely. The concept of Kubernetes is to make sure you are focused on deploying applications and on time to market with applications for customers; that is the idea or principle behind it. So there are a lot of products which are interfacing with a central API, which is the Kube API server, and are leveraging that central API to build their technology. So if, as a product, you align with that central API and are Kubernetes-native, then you automatically open yourself up to integrate with all these other products as well. If you are not Kubernetes-native, then what would happen is you would have to have another translation layer to make your product work with another third-party ISV or any other vendor product. As a result, in terms of how we have built our technology, integrations into Prometheus and Fluentd became actually very simple, and tomorrow, if there are any other ecosystem products that a user would want to integrate with, that would be very straightforward as well, as long as they are just leveraging the APIs and managing both products together. So it's important to utilize that control plane in a consistent manner. Definitely, because staying Kubernetes-
native is so important for folks, and I just want to drive that home. Definitely, and that's one of the reasons, right? Traditional approaches, which are VM-based approaches, need to change to a cloud-native approach so that you can have a completely self-sufficient Kubernetes environment, versus having to run Kubernetes on one side and virtualization on the other side, and things like that. Awesome, thank you so much for a wonderful presentation. So one question for you: it seems like a very comprehensive offering, so I almost hesitate to ask what's on your roadmap, what's coming down the pike for Trilio Vault, because it's sort of like asking what is missing. But what are you working on? Thank you, that's actually a great question for me to educate the users on. From a roadmap perspective, right now we only have an early access version of the product, which I was showing in my demo and which is available on OperatorHub and for upstream environments. By the end of this month or early July, we are going to GA the product, and as part of our cadence, our target is to push out code every month. We want to do it by the first of every month, or at least that's our target for pushing out new features, and we will keep that cadence from July onwards. Now from a future perspective, there are a lot of cutting-edge things that we have been thinking about, in terms of providing compression and encryption, and one of the most important things that I would like to talk about is a restore plan. Similar to a backup plan, where you specify what you want to back up as part of your application, we are working on something called a restore plan. In the restore plan, you can basically specify how you want your application to be laid out after it is restored: are there certain objects that you
want to manipulate before you restore it, or are there any additional workflows that you want to inject after the restore has been done? That actually opens up additional use cases and additional opportunities for customers to manage their data and their applications in a much more succinct fashion. So that's one of the cool or important things that we're working on. And the cooler portion of it, or at least what I feel is more impactful from a customer perspective, is that we will have a separate user interface for Trilio Vault for Kubernetes. So if you have multiple Trilio Vault for Kubernetes instances running across namespaces, or multiple clusters in general, you will have a single source of truth to manage all those TVK instances. That will basically make the management of your TVK instances easy, make the management of your backup and restore operations easier, and open up future conversations and capabilities for us to do drag-and-drop migrations and things like that. I think that sort of also answers it, but I'll ask again the question that just popped in: is the integration into ACM and MCM on the roadmap? Yes, yes. As part of our GA, when we do GA, we are going to be Cloud Pak certified; we are going to be Cloud Pak for Data certified. As a stretch goal, we are also targeting MCM certification, and then we would be integrating into MCM too. So there are three layers, or three steps, that we have figured out for how we'll be doing the integration: first will be mostly basic UI and SSO-based integration, followed up with much more detailed, ground-level integration between the two products. So I think one of the things Chris and I were chatting about separately here is that we'd love to have you come back, maybe not during the briefing session, but to actually do a live coding session: have a couple of the developer and evangelist
team create a cluster, have it install the backup and restore with Trilio, and then walk through restoring live. So if you're up for that kind of a challenge, we'd love to have you back to do something along those lines. Yeah, I would love to, that would be really fun. Yeah, once you're ready to GA and show it off, I'd love to have you back on live streams here. Yeah, definitely. I think what we could probably set up is label-based, Helm-based, and operator-based backups, and show how things look in Prometheus and Grafana along with the logs. I think that should provide a good idea of how the product flow works, basically putting the rubber to the road as well. Yeah, awesome, that would be awesome, because it's kind of like magic: it's like you've thought of pretty much everything, and I can't wait to see it in action live as well. When you get to GA we'll have you guys back, and we'll make sure that this is something we can all take advantage of. It's wonderful to see it in OperatorHub and OperatorHub.io already. I really encourage everybody who's listening in, whether it's here in BlueJeans or on YouTube or Twitch or Facebook, to take a look at this and try out the online lab; I think it's pretty cool to have that available to just try. Is that Katacoda-based? Yes, it is Katacoda-based. That's what it looks like. Love those Katacodas; they've done lots of great stuff with us and the partners and other Commons members. But it's wonderful to have you guys as part of the OpenShift Commons and in OperatorHub, and we're looking forward to the GA date. We'll all celebrate, but in the interim, please do everybody take it for a test drive, and we're looking forward to seeing how this rolls out in the future. So thanks again for taking the time, Prashanto, and to the other folks from Trilio who are online answering questions, Justin and Carl. Thanks for coming and making this happen.