Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind Cloud Native. I'm Amy Talastro, and I'm a CNCF Ambassador as well as a Product Marketing Manager at Camunda, and I will be your host tonight. Every week, we bring a new presenter or set of presenters to showcase how to work with Cloud Native technology. They will build things, they will break things, and they will answer your questions. Join us every Wednesday to watch live. This week, we have a great presenter and amazing content coming up. We have Ben Morrison here to talk to us about improving Core-to-Edge mobility and resiliency for Cloud Native applications. As housekeeping, as always, this is an official live stream of the CNCF, and as such, it is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of the Code of Conduct. In a nutshell, please be respectful of all of your fellow participants as well as the presenters. With that, I'll hand it over to Ben to kick off today's presentation. Thank you so much. It's a pleasure to be here. Thank you for having us, CNCF. We always enjoy working together with our partners. Today, we'll be talking about how to improve Cloud Native mobility and resiliency with Core-to-Edge capabilities. As Amy said, my name is Ben Morrison. I'm a solutions architect here at Trilio, and we'll go ahead and dive right into it. To start off, in terms of Core-to-Edge and the Cloud Native challenge: organizations are mostly turning towards this architecture with their Cloud Native applications for a variety of reasons, but it really boils down to wanting those actionable insights and to improve services for their customers and improve that customer experience. At the core of what this consists of is the mobility of that data across Core-to-Edge and all of those IoT components sitting on the Edge.
Improving those IT operations that are actually happening on the Edge — this is a big piece in terms of why people are moving towards K3S and other Kubernetes distributions to operate their Edge applications. Then on top of that, an important part of this architecture is resiliency, in two different forms here. One being the resiliency of the infrastructure and security, especially when it comes to ransomware protection or malware protection, and then also the Edge Cloud resiliency — keeping your actual applications up and running on the Edge. Real quickly, some statistics here about the challenges of Kubernetes and users moving towards this Cloud Native methodology of Kubernetes itself. On the left-hand side, we have some statistics from a survey that was taken all the way back in 2017. The reason why we show this is because the storage component is what we really want to look at here when it comes to stateless applications running on the Edge with Kubernetes versus stateful applications. All the way back in 2017, users reported their top challenges. Of course, their number one challenge was security and caution around those ransomware and malware attacks, but there was also the concern about storage itself — stateful applications running on the Edge or in their Kubernetes clusters instead of just stateless. Already back in 2017, that indicated a trend towards stateful applications running on Kubernetes. Now we know, of course, with the adoption of any new technology, especially when it comes to infrastructure and architecture, we always start off with stateless applications but then move towards stateful. We saw the same thing here with Kubernetes. At first, early adopters were all running stateless applications without any of that data actually on their Kubernetes clusters or on their Edge clusters. Now we're finding that users are moving more towards stateful applications. That's just setting the theme for the conversation today.
On the right-hand side, we have a CNCF survey, which is from 2020, I believe. I know CNCF will be coming out with a new survey sometime soon, so this data on the right is about a year old. But even here, about a year ago, we can see that over half of Kubernetes users were running stateful applications in their clusters. 22 percent were not — they were only running stateless. But we also found that 12 percent were evaluating stateful applications and 11 percent were planning to move towards stateful applications in the next 12 months. Here, again, just setting the stage for how to ensure those stateful applications are being appropriately migrated from core to edge and making sure we have that resiliency across core to edge in our migration scenarios. Now, just to paint a picture of the personas that manage those Cloud Native applications, we have a variety of characters here: Lisa, Brian, Rob, and Jane. Lisa is a developer, more of that high-end manager type, and then we have Brian, an SRE, who is more responsible for making sure the apps run successfully and monitoring those applications. We have Rob, who is more on the ops side and also a bit of an SRE, and then Jane, who is strictly on the ops side. The reason why we want to go over this variety of personas is because that is one of the huge advantages of Kubernetes: namespaces and the ability for so many different personas to access your cluster from all over the world. At the same time, though, that does pose challenges. In terms of security, every single individual that's interacting with a Kubernetes cluster, or a certain namespace, or a certain application or workload, poses another avenue of security risk — another person touching base with that Kubernetes cluster.
On top of that, it also requires a lot of organization and management at a granular level of the applications running in those clusters and on the edge — essentially just getting at the theme of the security and the granularity that we need to manage when talking about migrating from core to edge. When we look at core to edge in terms of Kubernetes, there's a variety of edge computing options you can use to run those applications at the edge, but one example here would be Rancher K3S, which is actually the edge cluster we're going to be using in the demo today. Then we have Rancher RKE as the core cluster; for today's demo, we'll actually be using EKS, but here in our example, we have Rancher and then the connection to AWS and the cloud itself. That just outlines the architecture of connecting and migrating data from K3S to core and connecting it through the cloud itself. The main characteristics, again, that you want to worry about or be aware of when it comes to this migration architecture are the resiliency of those applications, the mobility of those applications, and data curation at that granular level. When it comes to resiliency, as I mentioned, there are two different pieces. The first one is security. We want to make sure we are aware of any security pieces, especially when it comes to ransomware and malware. So today we'll be talking about the two most important aspects of that security piece when moving cloud native data from core to edge: the first would be immutability, and the second, very important piece would be encryption itself. And then secondly, you also want to think about your granular application control when moving those workloads back and forth. Of course, those edge clusters are usually much smaller than the core or any other typical Kubernetes cluster, so we want to make sure that we have consistent support from edge to core.
We also want to make sure that we have enhanced cloud migration capabilities, and lastly, granular application capture — staying at that granular level and controlling what actually goes to the edge and what does not. Perfect — there was actually an audience question I think it would be super helpful to take right now. A user asks: different people have different understandings of stateless and stateful applications; can you please give us an understanding of both? You kind of mentioned it, but maybe go a bit deeper so that we're all on the same page. Sure, absolutely. And please feel free to interrupt me with any further questions and clarifications like that as we go along as well. I want to make sure everyone is on the same page as we go through this. So stateless is defined as an application that does not store data on the Kubernetes cluster itself. It's mostly service-based. It does not store any data in a persistent volume in any sort of way, whereas stateful means either ephemeral or persistent data sitting in the cluster itself — some sort of storage of data within a persistent volume. An easy way to think about this would be: stateful uses a database. It uses MySQL, MongoDB, Cassandra, any of the databases out there. Stateless would not require a database at all. There's no data that needs to be stored with stateless. Perfect, thank you so much. And thank you so much to Femisu for the question — keep them coming. Absolutely, yep, great question. So moving on here, the first piece we're going to talk about under the security aspect is ransomware. As many of you probably know at this point, the conversations around ransomware have constantly been increasing in the past year or two, especially during the pandemic.
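To make the stateless/stateful distinction above concrete, here is a minimal sketch — resource names and images are illustrative, not from the demo. The first Deployment keeps no data on the cluster, while the second mounts a PersistentVolumeClaim for MySQL's data directory:

```yaml
# Stateless: no volumes -- nothing survives a pod restart.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: nginx:1.25
---
# Stateful: database files live in a PersistentVolumeClaim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql   # MySQL's data directory
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: mysql-data
```

A backup tool only has something to protect in the second case — the PVC is the "state" that has to move from core to edge.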
But just to go over the exact definition: ransomware attacks are defined as a cybersecurity attack where a malicious actor or organization gains access to an organization's software, encrypts it, and then holds it for ransom. Obviously a situation we've heard of before — we've heard of many of those ransomware attacks happening over the past couple of years. From statistics we gathered from 2021, we found that there are on average about 300 million ransomware attacks happening each year, probably projected to grow as we continue into the future. There's no guarantee with these ransomware attacks that the attacker or malicious actor will actually unlock that data. So organizations need to make sure that they have a contingency plan — some sort of plan for how to recover after a ransomware attack, especially when it comes to data moving across the core and the edge itself, which we'll get more into here. The average cost of a single ransomware attack has increased by 171% from last year to this year, and so it is now, on average, just over $300,000. And those attacks have increased 72% during the pandemic itself. And here, especially this last piece, I think pertains to that core-to-edge architecture: the increased access to business data over mobile devices has increased vulnerabilities by 50%. That is especially relevant for our core-to-edge cases, right? Because we have so many more devices out there on the edge. We have so much more data going over our networks, and we need to make sure that those vulnerabilities are secured as best as they can be. So that's some of what we'll be talking about today — how you could use Trilio, or any other solution like Trilio, to make sure you have that ransomware security.
So there are a few different ways organizations have been going about combating ransomware, but the two major institutions out there in terms of building best practices — the two most internationally recognized institutions — would be NIST, the National Institute of Standards and Technology, and then the NCCoE, the National Cybersecurity Center of Excellence. If you've not heard of either of these two organizations, a good rule of thumb would be to follow their guidance in terms of how to protect your core-to-edge architecture against ransomware attacks. Ransomware protection is obviously more than just a single feature. It's an entire strategy, an entire comprehensive approach to how to protect against ransomware and then also recover from a ransomware attack. As I said before, many organizations, Trilio included, have chosen to follow the NIST and NCCoE solutions in terms of best practices for protecting against ransomware. What we have found on our end is that the main components of these two frameworks essentially boil down to three categories. The first one being identify and protect, the second one being detect and mitigate, and then lastly, recover. To go more in depth: identify and protect would be searching and looking for any vulnerabilities within your system itself. Detect and mitigate would be detecting any malicious actors, any malware, any ransomware within your core-to-edge architecture. And then recoverability would be, after an attack itself has occurred, having some contingency plan — some plan to recover that data and not have to pay the ransom. Also aligned with this, Gartner has recently come out with a report that you can look up — I think we'll be able to share it in the chat or in the banner itself at some point here during the presentation.
You can also Google it yourself: how to prepare for ransomware attacks. Even here, you can see in the abstract that Gartner is outlining a pre-incident preparation strategy; a strategy you need to have to identify that ransomware attacks are happening — that second piece, the identify piece; and then that third piece, training for all staff for a post-incident response, and even scheduling drills, is what Gartner recommends here. Part of that post-incident response would be something like recovering your data using some sort of backup and recovery solution such as Trilio. Another important note when thinking about your architecture for migrating from edge to core or core to edge would be zero trust architecture, which we essentially define as a strategic initiative rooted in the principle of never trust, always verify. That essentially means you are constantly requiring the authentication of all users within a Kubernetes cluster and application — remember that image we had at the beginning of the presentation, so all four of those users would have that constant authentication — and then also constantly validating the security of all possible vulnerable points within your system as well. Getting a little more depth into this cloud native challenge of ransomware and security when looking at edge-to-core architectures, you have two different components here, two entry points of attack where you want to make sure your security is handled. First would be the Kubernetes management console itself. As you can imagine, you have to have security software to make sure that you're identifying and protecting against any sort of cybersecurity attack.
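One concrete building block for that never-trust-always-verify posture is namespace-scoped RBAC, so each of the personas from the earlier slide gets only the verbs they need. A minimal sketch, with illustrative names — for example, a read-only role for Brian, the SRE persona:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: edge-app               # illustrative namespace
  name: sre-readonly
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "pods/log", "deployments"]
  verbs: ["get", "list", "watch"]   # observe only, no write access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: edge-app
  name: sre-readonly-binding
subjects:
- kind: User
  name: brian                       # the SRE persona from earlier
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: sre-readonly
  apiGroup: rbac.authorization.k8s.io
```

Each additional persona touching the cluster gets its own narrowly scoped Role, which limits the blast radius if any one set of credentials is compromised.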
And then second would be that storage media — that target or third-party external storage, whether it be S3 or NFS — where you have backups stored in case a ransomware attack happens. You want to make sure that those backups are secured and safe, and that you can rely on them to be properly restored so you don't have to pay that ransom. And so with that we'll get into the encryption and immutability pieces we're going to talk about. This here is a variety of features that Trilio has implemented, and I know many of the other backup and recovery solutions out there have some of these features themselves. But at Trilio, we took this security piece very, very seriously, especially as we see an increase of stateful applications in Kubernetes and an increase in the use of core-to-edge migration using Kubernetes. So this is a variety of the features in our solution that align to that identify and protect, detect and mitigate, and final recover piece. The two main ones I want to talk about would be backup immutability, on that identify and protect piece, and the encryption piece as well, getting to that core protection. When talking about a backup solution for recovering in the event of ransomware attacks, you want to make sure that you have the backup immutability piece — not just immutability of an entire target, like an entire S3 bucket, but very granular immutability. You can go in and make each individual backup plan, for each individual workload you have moving across your core-to-edge environment, actually immutable. When you get to that granular level, you can not only save costs — because making a backup immutable consumes more storage within an S3 target, so you can minimize your costs by getting more granular — but you also have individual keys for that immutability: instead of an entire S3 bucket being immutable, each individual backup plan is immutable.
And the same goes for encryption as well. One, you can get more granular and be more cost-effective: instead of encrypting everything within an S3 storage target, you can encrypt down to the backup level and select what you want to encrypt and what you do not — essentially selecting which data is the most mission-critical and which is not. And then secondly, having that backup-level encryption means that you have essentially a different encryption key for every single backup. So if a malicious actor were to try to attack these backups, and maybe get into the S3 target itself, they'd find that each individual backup is also encrypted — adding another level of security for your backups themselves. Now, getting more into that granular workload protection piece: we talked before about the importance of security when it comes to core-to-edge migration and core-to-edge architectures. Secondly, we have that granular workload protection we want to talk about. You want to make sure that you are able to protect your applications at the namespace level, so you can migrate entire namespaces — or, if you have any data loss when transferring data from core to edge, you have an entire namespace backed up. Beyond namespaces alone, we have actually added extra functionality in terms of application backup itself: you can back up based on Helm releases, Operators, and labels, and you can select which Helm releases and Operators you want to back up within a certain namespace. So again, getting very, very granular. Especially with the labels piece, I'm sure many of us are familiar with labels and keeping them organized with our workflows. You want to make sure you have a backup solution that allows you to select individual labels to get to that granular level. And I did see there's a question there. Yeah, here's the question: how do you secure the encryption piece?
Sure, so on the encryption keys themselves, I might have to follow up on the exact framework we use for the encryption piece, but essentially what we do through Trilio is utilize the S3 target's encryption capabilities. From there we've been able to slice it down, essentially, so you can encrypt at that backup level. But the answer would be that it relies heavily on the S3 target you're using and how you choose to protect those encryption keys through that S3 target itself. We are simply utilizing the functionality of the S3 target you choose to use. So I hope that answers the question — it relies mostly on the S3 target itself when it comes to the encryption keys. Great, I hope it did. And if not, then just let us know and we can take it from there as well. Thank you so much for the question. Yep, absolutely. Thank you. And then that last piece, when it comes to granular workflow protection, would be the migration workflow piece as well. With Trilio, we've designed a disaster recovery slash migration tool to quickly pull applications from one cluster into another cluster, which would be the easy way that you can actually perform that migration itself. With a tool like Trilio, you're performing that migration through your target — through your S3 bucket or NFS storage. Essentially, you would capture a backup of an application running in the core, for example; that backup would be stored on your S3 or NFS; and then you could recover it to your K3S edge cluster, which is actually the demo that we're going to be doing today. Having that workflow piece — some sort of tool where you're able, with minimal clicks, to restore a backup from the core into K3S — is the format that you want to have.
In the demo today, we're going to be showing how to do that migration piece, but I'll also note that if you have more questions about Trilio specifically, we do have newer versions available now, which have since come out, that include that specific disaster recovery tool. Then secondly, we want to talk about enhanced cloud migration features. As you can imagine, when you're migrating applications from the core to the edge, you're likely using different distributions. You have very different clusters that you're migrating between — you're probably not going to have a homogeneous system when it comes to those two different clusters. And so we've included something called restore transforms, along with inclusions and exclusions. The first piece there, restore transforms, is really neat, and it is definitely essential for this core-to-edge migration. What you can do with these transforms is actually change the metadata of an application before it's restored into the edge environment. So for example, if the core cluster and your edge cluster are using two different storage classes, you can use a transform to go in and change the metadata before you pull that backup into the edge cluster — change that storage class itself before it's restored. For inclusions and exclusions, these just help you get very granular. This means that if you do a backup of, let's say, an entire namespace on your core cluster, but you do not want to restore the entire namespace to your edge cluster, you can get very granular and include certain components and exclude certain components, so that you're really optimizing your usage of that edge cluster. Because, as I said before, it's going to be a significantly smaller size, and you're probably not going to be able to run everything that's on the core — or you shouldn't be running everything that's on the core.
Then, when we talk about these migration features, we have hooks as well. At Trilio, we use pre and post hooks for before and after a backup occurs, which allow you to execute any command that you need before and after that backup actually takes place. This especially comes in with databases — properly quiescing a database to make sure it's in a capturable state — but it can also be used for any other sort of system integration or automation within your system. You can utilize these hooks and input commands however you wish. And then our last piece here, I believe, is application-consistent support. You want to make sure you have application-consistent backups through those hooks, as I was just mentioning with quiescing the database. In terms of Trilio, our database support is very extensive — it's actually more than what's listed here. The databases you see listed are just examples we have in our documentation. If you want to take a look at what one of these hooks would look like, we have Cassandra, MongoDB, MySQL, MariaDB, et cetera — all of these examples are in our documentation. The nice thing about hooks is that it's very easy to become compatible with any of the databases out there, so I personally have rarely ever seen an issue where we didn't have database support with Trilio. Just make sure that your backup solution is compatible with whatever database you are using. Okay, and then I think we're going to switch to our recorded demo here. I'll pretty much play this all the way through and then talk through the demo itself, because it's a core-to-edge migration — we wanted to capture this beforehand to make sure everything was smooth. What we're doing here is migrating an application — a WordPress application — from our primary cluster, which is an EKS cluster, as you can see in the URL up at the top, and migrating that WordPress backup to our K3S cluster.
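The granular, label-based backup plans and the pre/post hooks described above can be sketched together. This is a hypothetical CR shape — the API group, kinds, and every field name below are illustrative assumptions, not Trilio's actual schema — just to show how label selection and database quiescing typically fit together in a Kubernetes-native backup tool:

```yaml
# Hypothetical CRs -- apiVersion, kinds, and fields are illustrative,
# not Trilio's actual schema.
apiVersion: backup.example.io/v1
kind: Hook
metadata:
  name: mysql-quiesce
  namespace: wordpress
spec:
  pre:
    # Flush tables and block writes so the capture is application-consistent.
    # (A real hook keeps the session open; the lock is released when the
    # connection closes.)
    command: ["mysql", "-e", "FLUSH TABLES WITH READ LOCK;"]
  post:
    # Release the lock once the capture has been taken.
    command: ["mysql", "-e", "UNLOCK TABLES;"]
---
apiVersion: backup.example.io/v1
kind: BackupPlan
metadata:
  name: wordpress-plan
  namespace: wordpress
spec:
  target: demo-s3-target            # the S3/NFS target from the demo
  encryption: true                  # per-plan key rather than per-bucket
  immutable: true                   # immutability at the plan level
  backupComponents:
    helmReleases:
    - wordpress                     # back up only this Helm release...
    labelSelector:
      matchLabels:
        tier: mysql                 # ...plus anything carrying this label
  hooks:
  - name: mysql-quiesce             # quiesce the database around the capture
```

The point is the shape, not the names: selection by namespace, Helm release, Operator, or label, with security knobs and hooks attached per plan rather than per bucket.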
The primary cluster is EKS, as I mentioned, and the K3S cluster is a Rancher cluster. So here we're looking at that admin view — the multi-cluster management and the Trilio UI showing both of those clusters — but we're going to move to just the K3S cluster for the sake of this demo, pretty much simulating what it would be like to be a user with permissions only to that K3S cluster, along with permissions to the target itself to grab that backup, and what that piece would look like. So here in our K3S cluster — you can see, again, the URL at the top — we changed windows and we are now in that Rancher K3S cluster. We can see some application discovery in the middle; this is essentially just showing all of our namespaces and what is protected and what is not protected on this cluster. Let's just switch it over quickly there and then go back. Okay, we're back here at the K3S cluster, showing our namespaces, what's protected and what's not protected. This is a fresh K3S cluster, so as you can see, there are really no applications running here and no backups that have occurred, but we do have that restore-ns3 namespace, which is where we're going to be restoring our WordPress application — again, migrating from the core into this K3S cluster. To do that, we're going to take a look at our resource management tab. Oh, I'm going to go back there for a second to see if I can capture that — I know the video is sped up here. What we're doing is looking into — here we go — our target browser itself. Here, what we did is we connected our S3 bucket, that target I mentioned, where we're going to be going core, to S3 target, to K3S edge cluster. What we've already done is that backup from the core to the S3 target.
We have our target here, demo S3 target, where that backup of our WordPress application — originally in the core — is stored, and now we're going to launch a target browser. From this window, we're looking into the target browser at that backup that was taken from the core cluster, and then we're going to restore it into the K3S cluster running on the edge. Going into the target browser, we can see an Angel Beats backup plan, which is where this backup occurred — again, backing up that WordPress application from the core. Looking at this backup, we can see our WordPress backup sitting here, taken some time ago, and we can go ahead and restore that backup into our current cluster — that K3S — right from this window. We'll simply name that restore one-two-three-four and then find the restore namespace we want to use — bringing that entire namespace into the restore-ns3 namespace of our K3S cluster. Here we're flagging skip if already exists. This is one of a variety of flags that we offer — let me go back there just to check on that — flags that, again, let you get very granular. With skip if already exists, as you can imagine, if an object that Trilio is trying to restore already exists on that K3S cluster, Trilio will know to just skip that object entirely and not create duplicates — again making sure you're optimizing and being very conservative with the space and the workloads that you have running on your K3S cluster. You can also, of course, use patch if already exists, or omit data, as we talked about with some of those exclusions. Then we have our transforms — you'll see transforms, exclusions, and hooks, which we just talked about. The transforms are the most interesting piece here, so we'll go over how to make a transform and actually change that metadata.
So in our original application — our WordPress application on the core EKS cluster — we were using a storage class of EBS SC, and we'll show that at the end of this video as well. But in our K3S cluster, we're using a different storage class. So what we need to do in our K3S cluster, before we restore, is go in and select the objects we want to alter. We'll do a replace operation, selecting the path of spec storage class name to change the name of the storage class this application is going to use, and change it to CSI hostpath SC. Now, when that backup is restored and our WordPress application is migrated to K3S, right from the get-go it's going to know to use the CSI hostpath SC storage class instead of the EBS storage class. Here we're just making sure it was properly saved — and once that transform is saved, it can of course be reused over and over again, so you don't have to worry about recreating it and figuring out the paths to change those storage classes. Once it's saved, you'll have it forever on your instance of TVK. And now we're just going to go back and monitor this restore process itself. At this point, it'll take about four minutes for the restore to occur, so if there are any other questions, I encourage everyone to drop those in the chat now and we can have some open discussion about what we've seen so far. If there are no questions, I can of course skip ahead, since this is recorded, but I would like to take some time for questions if need be. I'll go ahead and keep an eye on the chat in case anyone wants to ask any questions. While we wait for people to hopefully type in those questions, I would have one: why is it important to implement effective application mobility and resiliency for cloud native applications, actually? Why is it important to implement security and resiliency for those cloud native applications? That was the question.
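The transform walked through above — a replace operation on the storage class path — follows the JSON-Patch style. Sketched as a hypothetical restore CR (the API group and field names are illustrative assumptions, not Trilio's exact schema), it might look like:

```yaml
# Hypothetical restore CR -- field names are illustrative.
apiVersion: backup.example.io/v1
kind: Restore
metadata:
  name: restore1234
  namespace: restore-ns3            # target namespace on the K3S edge cluster
spec:
  backup: wordpress-backup          # the backup pulled from the S3 target
  skipIfAlreadyExists: true         # don't duplicate objects already present
  transforms:
  - name: change-storage-class
    resource: PersistentVolumeClaim
    paths:
    # JSON-Patch style replace: swap the core cluster's EBS storage class
    # for the edge cluster's local CSI class before objects are created.
    - op: replace
      path: /spec/storageClassName
      value: csi-hostpath-sc
```

Because the transform is just a patch against the backed-up metadata, the same saved transform can be reused for every future restore that crosses the same pair of clusters.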
So the security piece, as we talked about — it's one of those instances where it's like an insurance policy, right? You want to make sure that you are protected. You hope nothing ever happens, but in the event that something does happen, you want to make sure you are secured in terms of having a backup to restore your data. And as we talked about, especially with core-to-edge architectures, you have a lot more vulnerabilities because you have a lot more mobile clusters running out there, so you really want to make sure that you're prioritizing security. And then secondly, that granular approach, as we've talked a bit about: when you're migrating applications and workloads across different clusters, you're going to have a lot of differences between those clusters — differences of size and of what they can actually manage. So it's really important to stay organized at that granular level, in terms of only migrating the pieces that you need to migrate. That's really what solutions like Trilio are offering here: a backup and migration solution all in one. As you're backing up those applications in the core, from that backup in your S3 or NFS target storage you can restore those backups into multiple edge clusters, and you can get very granular in how you restore them as well. So all in one process, you're getting a bit of the security and a bit of the migration granularity in one tool. Is there any downtime associated with a backup and migration process? I'm seeing that now. The answer is no. Hooks are sometimes needed to put databases in a capturable state, but essentially there is no downtime when using a tool like Trilio, because of those hooks. You could say that maybe extra resources are being used on the cluster, so resource optimization won't be at its peak, but there will not be any downtime when it comes to that migration piece, on either the core or the edge. Any other questions here? We have a few more minutes.
We can see the target validation has occurred. We've had a validation of the backup itself, just ensuring that it is a healthy backup. It looks like now that data restore process is occurring. So currently it's restoring that WordPress application into our K3S edge cluster. Another question came in: so do you buffer the transactions and play them back later? Could you elaborate a little bit more on that question? I'm not sure exactly what you're asking there with buffering transactions and playing them back later. Yes, let's see. Feel free to clarify a bit so that we can answer you in the best possible way. And while we're waiting for clarifications or more questions, obviously, anyone, if you have any questions, leave them here. We still have time to get to them as well. Is there anything more happening in the demo at the moment while we wait for the questions and clarifications? I'll go ahead and just skip ahead here so we can get to the end of this restore. So it seems like we don't have too many questions coming through the chat, but the ones that have come have been great. Thank you so much to everyone. Yes. Thank you, thank you. So here we can see, just fast-forwarding a little bit, that our restore process is complete. It took about six or seven minutes to complete, going through that data restore process and the metadata restore as well. That's part of Trilio's functionality: we capture both the metadata and the data itself, because of course you would need both in order to migrate and run those workloads across different clusters. Here we can see a restore summary and a metadata summary. We can look at the exact metadata, and here we're looking at our persistent volume claim, and we can see that in this restore process we successfully restored the WordPress application using CSI Hostpath SC as the storage class. Now, just to confirm part of this. Excuse me, I lost my train of thought there.
Part of this restore process was changing that storage class itself, so we're going to go back into the original backup plan. Here we're back at the backup plan, looking again at the original WordPress backup that we pulled this from. And we can see here that this storage class is that EBS SC. So just showing again that as you back up your core applications, you can migrate them across those different edge clusters and still alter the data itself, alter the metadata of the application itself, to make sure that that restore process happens smoothly. Again, playing into that all-in-one security and granular migration feature here. Perfect. So let's see, we have one other question. Yeah, the question goes: I presume the system is being used while the migration slash backup is being done. How do you ensure that in-flight data that hasn't been persisted to the source makes it to the destination? Gotcha, okay, that definitely clarifies it. Thank you for that clarification. So first off, in terms of the system running while that migration and backup is being done, what Trilio does is, when it captures that application to do the migration, it uses CSI snapshotting capabilities to take a quick snapshot of that application itself and capture the data as it sits at that nanosecond of the snapshot. It captures the data as it sits and backs that up to then be migrated. If you use hooks in your backup process, then you can make sure, however you customize those hooks, that the database and the data you're capturing are in an application-consistent format: that you are not backing up data that's only partially complete, that you're backing up all complete data, however you need to have it quiesced for your system. And then the later part of that question, in terms of how to ensure that in-flight data that hasn't been persisted makes it to the destination.
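The point-in-time semantics described above can be illustrated with a toy model: a snapshot freezes the data as it sits at snapshot time, so a write that lands afterwards is simply not part of that backup. Real CSI snapshots happen at the storage layer, of course; this is only a conceptual sketch:

```python
import copy

# Toy illustration of point-in-time snapshot semantics: the snapshot captures
# the volume as it sits at capture time, and writes that land afterwards are
# not included. Real CSI snapshots work at the block/storage layer.

volume = {"posts": ["hello-world"]}

snapshot = copy.deepcopy(volume)           # point-in-time capture

volume["posts"].append("draft-in-flight")  # this write lands after the snapshot

print(snapshot["posts"])  # ['hello-world'] - the in-flight write is not captured
```

This is exactly why the hooks matter: they make sure that, at the instant of capture, the frozen state is application-consistent rather than mid-transaction.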
So in terms of migration, this would be a migration component for data that doesn't have as many IOPS going on, per se, because once you capture that data, then you would restore it into your edge cluster. And of course, if that backup and restore process takes 10, 15, 20 minutes, whatever it may be, then you would have a lag in your data there between your edge and your core. Now, that being said, that's how Trilio stands now in terms of backup and restore. What we're in the process of doing, with a new release coming out in the next six months or so, is a continuous restore feature, meaning that essentially, as those backups are occurring from your core environment, your core cluster, going through the S3 target itself, they would be continuously restored on your edge cluster. So you would have a much more minimal, almost near-zero time in between that backup and that restore happening, because it's more of a continuous process that's occurring. So I know that was a bit wordy there for that question, but at the core of the answer: one, you can use hooks to make sure your applications are in a backup-ready state before that backup occurs, or while that backup occurs. And two, once that backup and restore occurs, you would see a little bit of a lag in terms of data consistency, a couple of minutes or a couple of hours, however long that process takes. But beyond that, we will have features in the future to minimize that data gap as much as possible. So that concludes the demo I wanted to show today. I think hopefully we also got those two links dropped in the chat at some point. We had one link about a Trilio blog that came out talking about edge-to-core, excuse me, core-to-edge, or vice versa, migrations using Trilio. And then there was another post in there, which was the Gartner report we wanted to make sure we shared with the audience as well.
Yes, and if not, we are going to post them now, so everyone can get to the links and get started on learning more and diving deeper into these really important topics. Yeah, great. Yeah, thank you. There was a thank-you for the clarification. Very wordy, but I think wordy is good as far as answering questions. Yeah, for sure. And now thank you so much for your presentation and the demo. So now we have a bit of time for the Q&A portion as well. Not that we haven't gone through Q&A during this session already, but now that we're kicking that off, everyone, obviously just keep the questions and comments coming. Anything, we're super happy to hear from you. That's why we are here. Thank you for the questions so far. But to kick it off, thank you so much for the general overview and everything. But then if we go a bit further, I guess: how do application mobility and resiliency improve IT operations at the edge, in general? Sure, so that application mobility and resiliency piece, improving those IT operations: obviously you want your applications to be as up to date as possible. And first and foremost, you want a very easy and automated process for moving those applications, for migrating those applications. If you don't get those workloads onto the edge itself, then there's not much to operate when it comes to IT operations. So that's really at the core of what we've been striving for here at Trilio: having that backup, and also the migration piece, be as automated and with as few headaches as possible. Kubernetes has obviously really grown in the past couple of years, as we've seen, as CNCF has seen, and everyone here has seen. And so part of that is constantly developing those new tools to be able to make all of these processes, such as migration, as smooth as possible. So this is our approach, and I'd be curious to see any other approaches out there as well. Perfect. Wonderful.
So another question from my side: what is an effective way to protect and migrate workloads between core and edge using Kubernetes? Right, so that protect-and-migrate question: that would be at the core of what a tool like Trilio does, because with cloud native applications, you can not only back up those applications using the same tool, but you can migrate those applications using the same tool. There's a variety of other backup solutions out there; this is the Trilio approach to that piece. But essentially, now you know that you can have one tool to do both the migrations and the backups themselves. Perfect, that's always nice. So now that we have seen the current best practices and state of core-to-edge mobility and resiliency, how do you see this space growing in the future? What does the roadmap hold for Trilio, or what will the space have coming up in the future as well? Yep, absolutely. That's a great question. I love that question. So as we saw a little bit in the beginning of our talk here today, those industry trends are increasing when it comes to workloads being put on Kubernetes clusters themselves. As we all know, Kubernetes is being adopted at a very rapid and steady rate. And so I'm sure that core-to-edge and edge computing and edge clusters are going to be a huge part of that. That's actually one of the biggest advantages of Kubernetes that people tend to talk about in the community. And so I'm assuming and expecting, and excited to see, those edge clusters and those edge migration pieces growing a lot in the future. Perfect. And now we've had the links shared to the chat so everyone can get learning more as well. Is there any other kind of material, if someone's getting into this space, that they should look into, or any beginner or advanced material that you usually recommend to people to start learning? Sure, great question. So both of those two links would be a great place to start.
The first one is that Gartner report that I mentioned briefly, talking about essentially how you should go about protecting your environments, and especially your core-to-edge architectures, against any ransomware attacks. So that would be a great place to start. It might not be the first direction you think needs to be addressed, but security is definitely day one, and so it's something that needs to be addressed immediately when thinking about your core-to-edge architectures. And then secondly, you can start off with that blog that Trilio has posted, talking about using a tool like Trilio Vault for Kubernetes to go about those core-to-edge migrations. So that would be an excellent place to start as well. Beyond that, if you want to learn more about Trilio specifically, you can go to docs.trilio.io. But besides that, the Gartner report especially would be an excellent place to start. I would also say visiting some of those NIST or NCCoE websites as well. Perfect, a lot of deep dives are going to happen after this through the materials, for sure. So I think this is the final call. We still have a bit of time, so if you have any questions or comments, anyone, please keep them coming. But then, do you have any final words, conclusions, or anything else to add? I have nothing else on my end. Thank you everyone for coming and attending today, talking about core-to-edge mobility and everything that goes around core-to-edge. I know it's obviously a lot, and there could be some things here that you may not have thought of as being a top priority for that mobility and resiliency, but these are the two aspects that we think are the most important when looking at that mobility and resiliency: security and granularity. That's what it really comes down to. And so from that, I would say thank you, everyone.
I don't think we have too many more questions here, but if there are any more, we can wait around and see as well. Yeah, for sure. Maybe from my side, a final question while we see if any other questions are being written here. So you mentioned Helm as a great CNCF project to use as well, and kind of went deeper into there. Are there any other CNCF projects, whether sandbox, incubating, or graduated, that you would like to recommend in this space, or see as good complements to work with Helm as well? I would definitely say Helm would be the top one. I'm not thinking of any others off the top of my head right now, but I'd be happy to follow up, maybe over email. We can talk about some other CNCF projects out there that are complementary with Trilio, but Helm would be the biggest one. And Helm, we also wanted to prioritize having those Helm-based backups, because we see such heavy adoption of Helm charts. The majority of organizations we talk to, whether DevOps or SREs or IT ops teams, are using Helm charts to deploy their applications on Kubernetes. I personally use them all the time. I think it's the smoothest way to go about deploying those applications. And so we wanted to make sure that you could migrate your actual Helm chart itself along with the application. So to clarify some of that, and I'm not sure I actually covered this much during the talk, so this was a great question: when Trilio goes about capturing a Helm chart, not only do we capture all of the metadata and data of the application associated with that Helm chart, every object needed to run the application, but we also capture the Helm chart itself and all revisions of the Helm chart. So when you're restoring a Helm chart from one cluster to another, you're getting the application exactly as it was running before, at the point in time it was captured, and you're also getting all revisions of that Helm chart itself. So our compatibility with Helm charts has a lot of depth to it.
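The Helm capture described above (application objects, plus the chart and every one of its revisions) can be sketched as a simple data model. The function and field names below are assumptions made for illustration, mirroring the description in the talk rather than Trilio's actual internals:

```python
# Illustrative sketch of the Helm-aware backup described above: the capture
# records the application's objects plus the chart and all of its revisions,
# so a restore can reproduce the release exactly as it was at capture time.
# Field and function names are hypothetical, not Trilio's data model.

def capture_release(name, revisions, objects):
    """Bundle a Helm release: all revisions, all objects, and the live revision."""
    return {
        "release": name,
        "revisions": list(revisions),   # every Helm revision, not just the latest
        "objects": list(objects),       # every object needed to run the app
        "current": revisions[-1],       # the revision running at capture time
    }

backup = capture_release(
    "wordpress",
    [{"revision": 1, "chart": "wordpress-15.0.0"},
     {"revision": 2, "chart": "wordpress-15.2.0"}],
    ["Deployment/wordpress", "PersistentVolumeClaim/wordpress-data"],
)
print(backup["current"]["revision"], len(backup["revisions"]))  # 2 2
```

Keeping all revisions is what preserves rollback history on the destination cluster: after the restore, the release can still be rolled back to revision 1, just as it could on the source.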
And we did that for a reason, because we see a lot of adoption of Helm charts in Kubernetes. Right, Helm is absolutely wonderful. I always recommend people use it as well. It's a great complement, for sure. But yeah, since no new questions have popped in, and we did handle a lot of questions during the presentation, we've been busy with the Q&A during and after here. So I will start wrapping things up for today. Thank you everyone for joining the latest episode of Cloud Native Live. It's been a pleasure. It's been great having Ben Morrison talking about improved core-to-edge mobility and resiliency for Cloud Native applications. I have to say that I really loved the interaction and the questions from the audience. We tackled many good clarifications as well. And as always, tune in next week, because we bring you the latest Cloud Native code every Wednesday. There's actually a final question; I think we can take it before I wrap things up with the final words. So, is there a simple way to create a test setup to experiment with these processes? Absolutely. I can always speak to Trilio and our approach to migration: if you want to experiment with Trilio, we have free trials you can download. And then beyond that, just any cluster that you have available. I know there are a lot of free trials out there to spin up a GKE cluster or anything, and there's also minikube as well. I've installed a minikube cluster on my personal computer; any computer that you have, you can install minikube onto and start testing out those migration capabilities. You'd get, of course, two different minikube clusters.
So that would be my recommendation: look into minikube, look at some free trials for clusters themselves, get two of them spun up however you can, and then get a tool like Trilio to take a backup and restore applications across those clusters to test that migration piece. Great question there. Perfect. I've noticed that any time there's a final call, there are usually some questions. Always one more. Yeah, for sure. Thank you so much. So next week, we will have Martin Wimpress presenting Building, Analyzing, Optimizing and Securing Containerized Apps. So thank you for joining us today, and see you next week. Thank you, everyone.