A little bit of extra time for folks to join us. Looks like that number is leveling off. Hi, everyone. I'd like to thank you all for joining us today for today's CNCF webinar, Cloud Native Storage for the Enterprise. My name is Ariel. I'm a CNCF ambassador, and I'm also a business development manager for cloud-native technologies at NetApp. Today I'm moderating the talk with a couple of my colleagues. I'd like to welcome Chris Merz, principal technologist, and George Tarani, product leader for Kubernetes and cloud-native data, also at NetApp.

A couple of housekeeping items before we get started. During the webinar, you're not going to be able to talk as an attendee. There's a Q&A box right at the bottom of your screen; please feel free to drop your questions in there, and we'll get to as many of those as we can at the end. A reminder: this is an official webinar of the CNCF and, as such, subject to the CNCF's Code of Conduct. Please do not add anything to the chat or questions that would be in violation of the Code of Conduct. Basically, please be respectful of all your fellow participants and presenters. Please note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. With that, I'll hand it over to Chris and George for today's presentation.

Great. OK, well, thank you, Ariel. Hi, nice to be on today. Very, very happy to be on a CNCF webinar and to be able to talk a little bit about how we've been enabling cloud-native storage for the enterprise, including our own enterprise and our customers. Today I'll be presenting along with my colleague, George Tarani, who has been leading our product efforts around Kubernetes and cloud-native data storage for a number of years now. My name is Chris Merz. I'm a principal technologist in the NetApp Hybrid Cloud Group, and my focus for the last several years has been on DevOps and cloud-native technologies.
So to get things started today, here's our list of objectives, basically our agenda for the day. We're going to talk a little bit about the state of state; George is going to kick it off with that. Then I will talk about the inside view: how have Kubernetes and cloud native affected our organization, NetApp in this case? We'll go through how they've affected us internally and spurred internal transformations, how Kubernetes and cloud-native technologies have affected our product and go-to-market strategic response, and how we resource various products and efforts, engineering resources, et cetera. We'll talk about a few example end-user cloud-native storage architectures that our customers are putting in place today. Then I'll hand it back over to George to talk about principles of cloud-native storage, and then we'll close it out. So without further ado, I will hand it over to my colleague and friend, George Tarani, for a little bit about the state of state in cloud-native storage.

Thank you, Chris. Hello, everyone. Good day. Hope you are well and safe in these crazy times, and we really appreciate you joining us for this webinar this afternoon. A bit of level-setting first; if you could go to the next slide, please, Chris. Chris is driving today, so bear with us. This is kind of level-setting, preaching to the choir, I guess: stateless, indeed, is for TV. Maybe the only application that I know of that genuinely has no state is Hello World. But I think we can agree that any workload of value to the enterprise needs to be persistent. In fact, if you have a look at Docker Hub, seven out of the top 10 images are stateful applications. Even the most basic applications require some kind of state. For instance, take Windows Notepad: if you change the view option to wrap sentences, the next time you launch Notepad, it will remember to do exactly that.
So the gist of what we're trying to say here is that, whether you're running databases or line-of-business applications, lifting and shifting monolithic applications into containers, or even starting brand-new cloud-native applications, persistence and storage is a must. By the way, for those of you who are not familiar with the show Stateless, it's a great one; I think it's coming to Netflix, so I strongly recommend it. Anyway, with that said, let's go to the next slide, please, Chris.

Very good. So they say you never know where you're going if you don't know where you've been. In the prehistoric days of containers, the main objective was to accomplish the bare minimum, which was to connect external storage to your containers. So what most storage vendors did, NetApp included, was develop our own volume plugin. In fact, I think NetApp was the first to publish a plugin for Docker, back in the early 2010s if I'm not mistaken. By 2016, there was a noticeable shift to Kubernetes, and this ushered in a whole new set of users who weren't particularly storage experts. As a result, they wanted to manage data from the Kubernetes level, not necessarily at the storage level. In some cases they didn't even have access to the storage, because, as you all know, and this is probably still true, most organizations have a storage admin class that manages the entire storage infrastructure. But these new users wanted to bypass the storage admins altogether and manage the data from the Kubernetes level. And this was a catalyst for NetApp, basically transforming our open-source posture and the way we contributed to the community. We merged our Docker code with a new branch of code for Kubernetes and named it Project Trident. Some of you might be familiar with it; it's got a bit of a cult following.
This project was the first external provisioner for Kubernetes. Now, fast-forwarding to today: what a crazy year 2020 has been, right? First we almost started World War III, then Prince Harry and Meghan Markle stepped down from the Royals, and of course now the global pandemic and COVID-19. While all those events haven't directly impacted the open-source ecosystem, they certainly have had an indirect effect. For instance, KubeCon was postponed, unfortunately. And one could make the case that this period has allowed us to reassess our current predicament, maybe reevaluate our wants and needs. One of those, related to this particular discussion, is that Kubernetes users want containers and the data associated with them to go together. They also want some form of de facto standard for what makes up an application in Kubernetes. As you all know, there is no such standard today. Some folks have their own interpretations, but there isn't one that the community follows as a whole, and an application currently is a loose collection of Kubernetes objects. The point is that we have some work to do as a community, but I'm glad that this community is focusing its attention on operations that matter, like application-consistent backup and restore, full-stack application migration, disaster recovery, and so forth, in multi-cluster and multi-cloud environments. Next slide, please.

So I borrowed this from the CNCF website. As evidenced by this chart, the cloud-native storage ecosystem is very vibrant. It's really a collection of innovative vendors who come from all walks of life, which adds flavor and depth to how the community approaches building out solutions for workloads in containers. Now, admittedly, some have a more profound understanding and impact because they bring decades of experience in a multifaceted way, meaning not solely focused on storage. Take NetApp, for instance.
Now, NetApp's been around for 20-some-odd years. In the recent past, NetApp has been one of the most active contributors to the open-source community, and it has successfully re-engineered itself to become a hybrid cloud vendor. We're able to draw from deep storage expertise and combine that with a mastery of Kubernetes. So this space, as I say, watch it; it's growing. We believe we're better aligning ourselves with customers' needs and wants, and we hope to cover some of those topics today. Here's Chris.

All right, thanks, George. Bear in mind that what I'm going to present over the next few minutes is not necessarily serial. This is what's happened over the last four years, and a lot of these changes have happened in a flowing manner. What I'm going to show you in the next set of slides is basically a condensation of what we've done over the last four years, as Kubernetes has gained popularity in the marketplace and eventually won the orchestrator wars. So: 25 years of data storage meets cloud native, and it's actually 27 now, I think, something like that. We'll start out with this quote from William Gibson: "The future is already here, it's just not evenly distributed." That's how I would describe cloud-native technologies in general, but especially cloud-native storage and advanced cloud-native storage architectures and requirements. Just to set the stage a little bit: for our organization, let's take NetApp IT, your standard corporate IT department, about as down-the-middle, center-of-the-road as you can get. We came from a waterfall background, and we've been transitioning to Agile across the board for the last number of years. It used to look like this: you've got your semi-annual releases, and then you start moving to the Agile experience.
But in the modern world of app development, our developers wanted to be able to push apps faster. We wanted to get new capabilities out to our internal users with our corporate apps faster. We need to be more responsive: to the C-level, to marketing, to sales, to all the things that happen in a shifting marketplace. When things change, or you need to get new capabilities out to customers quickly, really the only option is to do CI/CD and the things we would consider cloud native today, things that came from DevOps and have been baked into cloud native: CI/CD, self-healing architectures, auto-scaling, and so on. So, approaching this problem for us: if you're going to go cloud native everywhere, let's discuss real quickly, what's a cloud? If you're going to be cloud native everywhere, both in the public cloud and on-premises, and create hybrid clouds, to be able to do Kubernetes, containers, microservices, and other cloud-native technologies and patterns everywhere, you basically have to extend the cloud. Each cloud has a compute service for VMs. There's also a block storage service, probably an object storage service, probably a file storage service, and maybe containers as a service or Kubernetes as a service, some type of *KS. And then monitoring and observability to tie it all together. Every cloud follows this basic pattern of surfacing physical resources into services that are consumable as metered, pay-by-the-drip services for end users. Well, on-premises, if you're going to create a cloud, especially with an eye towards cloud native, you've got to do the same thing. You've got to have a compute service. You might be doing all kinds of things with that, including serving regular VMs, maybe VDI, which is just another form of VM; these are things that corporate entities run into all the time. Then you also have modern app development.
You've got containers as a service. You've got some type of Kubernetes service involved, depending on which flavor of Kubernetes your organization is going with. And then, of course, particular to corporate entities on-prem, you've got a lot of business apps to deal with. That's a little bit of a different story, but there are analogs for that in the cloud as well. And then object storage, file storage, block storage, monitoring and observability. So a cloud is a cloud is a cloud; it depends on how you architect it. Moving forward: so you build the cloud; what makes this a cloud-native experience? Ease of use is one big part of it. It's not just about the technology; it's about how you deliver it and how you consume it. I call it "cloudy with a chance of IaaS" on-premises. Folks are looking for stuff that's always available, self-service, zero lead time, container-focused, capable of workload portability, so that you're not running into lead times or waiting on trouble tickets. Everything's moving generally as fast as possible, at machine speed. So what does that mean for architects? Well, you've got to have self-healing designs and easy ways to create catalogs for self-service. You want this all to be API-driven so you can integrate it with DevOps automation platforms and tools of various types. It should be Kubernetes compliant and have hybrid cloud extensions to be able to extend what's on-premises into the public clouds and back. So what does this mean for corporate IT, for evolving corporate IT leaders? Congratulations, IT: you are now a cloud service provider. These are the types of things that folks who are enabling cloud native within their data center have to deal with now. So how did we go about that? NetApp IT basically realized: OK, now we're a cloud service provider. What are we going to do about that?
This is how we organized ourselves: kind of a DevOps model, hub and spoke, line-of-business app dev teams with an SRE role baked in, going into what we call our IT One Cloud. This is not a product we're selling, just to caveat there; this is just an explanation of what our internal systems look like for NetApp IT. We use a lot of industry products, and most of these should be familiar to the folks on the call. These are things that have to do with both our internal predilections for app development and our partnerships, because with our internal IT systems we tend to drink our own champagne, eat our own dog food, that type of thing. We try to incorporate as many of our tech partner integrations as possible, so that we're absolutely sure we're getting primary feedback from our internal teams. Those third-party platforms are built on top of an amalgamated NetApp platform, which has some of our storage and infrastructure products in it, as well as Trident, which George mentioned earlier and which is open source. Trident is basically the connector for doing persistent volume claims, intelligent storage orchestration, et cetera, for cloud-native workloads. It took several years to get to this point, but now we're running basically cloud-native, container-based, Kubernetes-driven CI/CD and so forth, through to hybrid cloud, hybrid multicloud if you will, because we're extending into all three public clouds. So that's how Kubernetes, cloud-native technologies, and especially cloud-native storage have impacted us internally. What about how they've impacted our product strategy over the last several years? Well, this is what I like to call the impetus for the huge change, the power behind it: C4, the cloud consumer consumption context. We've got new cloud-native consumers; what is their consumption context?
These are customer needs that we heard directly from our customers and prospects and translated into our product strategy over the last several years. What we learned was that organizations and enterprises in transition need VMs and Kubernetes to coexist in relative harmony; that's a desire. There are certain flavors of Kubernetes that are dominating the commercial marketplace: you've got your Anthos and your OpenShift and various other flavors coming in, Tanzu and Pacific, all kinds of good stuff going on in the commercial sector, but of course it all comes back down to open source, right? And folks are shifting their mindset from storage to persistence layers. It's not about storage volumes or LUNs; it's about a flexible persistence layer that's programmatic. I can just put a config in a YAML file, and less than a second later, there's my storage, right? So all of our future storage services and products must be Kubernetes-ready; that's another thing we learned. And we need parity between the VM data services that we've been providing for years and Kubernetes data services. Also, a huge part of this is workload mobility. We've been talking about app mobility ad infinitum for a number of years with containers, and that's all well and good. As George talked about at the top of the talk, stateless apps work really well that way: you basically kill them here, bring them up over there, and it looks like the app moved. But what if there's data associated with that? What if there's a tight coupling of data with that application? What if it's a data-rich application? Then, in order to make it portable, you actually have to move the app and the data in concert. And that's workload mobility. True workload mobility is hard to come by.
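The "config in a YAML file" experience described above is the standard Kubernetes PersistentVolumeClaim flow. Here is a minimal sketch in Python, building the manifest as a plain dict; the claim and class names are made-up examples, not NetApp-specific values:

```python
import json

def pvc_manifest(name: str, size: str, storage_class: str) -> dict:
    """Build a Kubernetes PersistentVolumeClaim manifest as a plain dict.

    Applying this to a cluster asks the configured provisioner to create
    storage dynamically; no ticket to the storage admin required.
    """
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": size}},
            # The class name is a hypothetical example; clusters define their own.
            "storageClassName": storage_class,
        },
    }

if __name__ == "__main__":
    # JSON is a strict subset of YAML, so this can go straight to `kubectl apply -f -`.
    print(json.dumps(pvc_manifest("app-data", "10Gi", "fast-nas"), indent=2))
```

The point of the persistence-layer shift is exactly this: the request is declarative and programmatic, and which storage system satisfies it is hidden behind the class name.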
The folks who are doing it today are generally rolling their own or using one of the non-standard capabilities that exist. This is a problem that is currently being solved, but is not solved yet. So how did that translate into our product strategy? Well, we became a founding member of the CNCF. And I want to get this across: this is not a brag slide. This is simply me sharing what we did as a corporate entity to respond to a shift in the marketplace, and we've been very successful with that, so I just wanted to give you all the insight. So: we joined the CNCF as a founding member. We have significant ongoing engineering investment in our CSI provider, Trident, which is open source; you're welcome to come take a look, download it from GitHub, submit changes, whatever you'd like. We contributed to the creation of CSI, and that's also ongoing. We did mandate that we provide Kubernetes integration for our current and future products and services; that's not a blanket statement, but it covers all those that make sense. And we're still going through a massive application modernization effort that includes all of our base products, so we're constantly working with engineering teams and seeing how they're containerizing and turning various products that may be monolithic architectures into microservice architectures. Some of them were born that way, greenfield, and the rest are transitioning. We've also made great efforts to work with the public cloud providers to provide tier-one storage services and partnerships, and validated hybrid architectures with ecosystem partners and other third-party vendors. This is to cover the commercial enterprise space and spread the goodness of Kubernetes and cloud-native technologies as far as we can. And of course, microservice architectures underpin a lot of our cloud data services that are in production today.
Here's a quick example of a NetApp customer use case. It's a major global network vendor, and I was able to help set the stage for the architecture here for their internal development cloud. It's basically as straightforward as you get: you've got developers producing containers, which are then orchestrated by Kubernetes, with Trident as the peanut-butter layer that attaches the persistent storage to the entire system. This allowed them to accelerate their timelines and improve their scalability and reliability, et cetera. Also, a global IT services vendor; this is a more generic use case. It allows them to provide cloud-native services to their customers around the globe. They chose to use OpenShift, a branded Kubernetes distribution that stays close to upstream, and using the right type of cloud infrastructure on-premises enabled them to basically revamp their global systems for modern application development. Another example is a media and entertainment company in EMEA. They are heavily data-driven, with lots of media and data-rich applications, all orchestrated by Kubernetes, and most of their applications are obviously container-based. In this case it was more complex: they were using a lot of object storage and wanted to make it geo-distributed, so they created a containers-as-a-service platform, as they called it, and ensured there was data mobility between their systems. This allowed them to achieve the faster performance and the ROI improvements they were looking for. So, I've talked a lot about how Kubernetes has informed our own internal transformations, how it's affected our product strategy, go-to-market strategy, and partnership strategy, and how this has translated into our customers' transformations and movements toward cloud-native technologies and cloud-native storage.
So now we're sitting at this point, and what I've noticed is that right now, even acceleration is accelerating. This can be an opportunity to accelerate some of the changes and transformations that we've been undergoing in enterprise and corporate environments. And so I think it's time to look at the horizon line and say: OK, what's next? Are we going to have standards for workload mobility? What is the community going to settle upon? And with that, I would like to hand it back over to George, and he can show us a bit more about what's next.

Thank you, Chris. You know, one thing you said actually resonated with me, and that is that things are accelerating much faster than before, aren't they? I remember when I was a kid, the world changed every 10 or 15 years. Now it seems it's changing every month. Yeah, what's terminal velocity? It's basically where the acceleration is accelerating. It's a bit mind-blowing at times. Take it away. Sure, thank you.

So we wanted to share with you what we've learned from you and folks like you in the community, and of course our customers. I realize the priority of this list might vary, it's relative, but I think these characteristics of what essentially makes up an ideal cloud-native storage infrastructure for a Kubernetes environment generally cover the entire gamut. Cloud-native storage should be highly available, with tiered performance SLAs. It needs to know that it's running inside containers. It should wholly and completely embrace upstream Kubernetes and avoid deep integration or customization with branded distributions. There's nothing wrong with them, but in the spirit of keeping things open, we as an organization embrace upstream Kubernetes. Last but not least, the world has changed, as we just said. Not everything's on-prem anymore.
Not everything's going to be in the cloud either; it's going to be somewhere in between. If we could go to the next slide, please, Chris. So, drilling down a little bit on these: what do I mean by availability and high performance? SLAs are of course important, right? You need robustness in the organization to make sure your applications stay up, and that should also apply to your cloud-native storage. For performance, whether it's IOPS, latency, or bandwidth, all these characteristics and parameters are important to maintaining a storage infrastructure that is consistent with the level of consumption as far as the applications are concerned. Good cloud-native platforms should be supported by their respective vendors, because not all organizations can depend solely on their own internal IT to do the problem solving, the integration, the consulting, and so forth. At the end of the day, particularly for multinationals and large organizations, be it your financial institution, your automotive manufacturer, or any other top vertical, having that support on a global scale is critically important. And the operational experience I'm referring to here is really that SRE sort of angle, right? It's one thing to know how one's hardware or software is architected, what its limitations are, and how it operates, but it's another to actually do it in a real-life situation in a real data center, managing a variety of different types of use cases across the board. At that level of experience, it's got to be self-service, with all kinds of automation, and have an observability angle, right? It's very important that the infrastructure teams understand where they fit; they're critically important to the organization when it comes to standing up services.
But I think, at the end of the day, the ideal scenario is to make the consumption model such that as users, we don't necessarily have to go to the source every time we want to make a quick modification or a request. And last but not least, it needs to be programmable, right? If you want to deploy this at scale, you have to have the mechanisms and the tooling to do it in a fully automated way. Next slide, please. So why is it important for the cloud-native storage to know that it's running in a container environment? Because then it understands what the consumption model is; it really goes back to that. It also needs to integrate with the storage services in the cloud. I read somewhere recently that 47% of enterprise workloads now run in some form or fashion in the cloud. If that's the case, then this is the new reality, and as such, the cloud-native infrastructure also needs to understand it. That means that if you're running your application today on-prem and you decide the next day to run it in the cloud, there have to be tooling, mechanisms, software, and intelligence that know how to move data back and forth in a way that stands up the application in its entirety: not in pieces, not just volumes, not just the application objects. It's got to be a complete solution. A robust and solid cloud-native storage infrastructure also needs to provide traditional storage tools for the infrastructure folks. As I said, the storage teams aren't going away; you'll always need that level of expertise. You may not need them for every particular operation, but you do need them to manage storage at scale. And of course, bringing it back to the user and application level: basic data management functions and data manipulation capabilities have to be available at the user level, from the Kubernetes perspective. Snapshots shouldn't be exclusive to storage administrators.
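That user-level snapshot capability maps to the CSI snapshot API in Kubernetes: a snapshot is requested through the Kubernetes API rather than a storage console. A sketch in the same plain-dict style as before; the names used here are illustrative, not values from any real cluster:

```python
def volume_snapshot(name: str, pvc_name: str, snapshot_class: str) -> dict:
    """Build a CSI VolumeSnapshot manifest: a point-in-time copy of a PVC,
    created by whoever owns the claim, with no storage-admin involvement."""
    return {
        "apiVersion": "snapshot.storage.k8s.io/v1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": name},
        "spec": {
            # "csi-snapclass" style names are hypothetical; the cluster admin
            # defines which VolumeSnapshotClasses exist.
            "volumeSnapshotClassName": snapshot_class,
            "source": {"persistentVolumeClaimName": pvc_name},
        },
    }
```

The CSI driver underneath translates this into whatever the backend's native snapshot mechanism is, which is precisely the "storage that knows it's running in a container environment" idea.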
Likewise, as a development enabler, cloning capabilities should be readily available to the users and not something that is exclusive to the storage team. So, some of the other characteristics that we think are critical; next slide, please. Native integration upstream and downstream. What I really mean by this: obviously CSI is a great step forward. It sort of democratizes the way container orchestrators interface with the underlying storage. But of course it's early days; it needs to mature more, and we realize, as participants in the community and as vendors who cater to customers, that we need to do more and do better. So that part, I think, is going to take its own course, and I think everyone is comfortable with the idea. But in terms of surfacing up additional capabilities and features that exist in the storage infrastructure, the idea, as I hinted earlier, is that embracing pure upstream Kubernetes is important. Now, you may choose to go with a distribution that has particular tools and provides additional application-level feature sets that are important. So long as the container orchestration piece is untouched and consistent with upstream Kubernetes, I think that is the key factor. The other part is downstream, with the actual integration itself. In the case of NetApp, for instance, we talked about Trident. Trident is NetApp's integration for Kubernetes, and you can see what it's comprised of and what it's doing, so there are no pockets of proprietary intellectual property in those integrations. Next slide, please. And I believe this is the last one of the four, right? I hope you can all hear me OK; apologies if I'm dropping off, I seem to lose packets here and there. And finally, the hybrid cloud use case: cloud-like, on-demand elasticity and scalability, in line with how consumers run their applications in the cloud.
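The user-level cloning mentioned above is likewise exposed through the Kubernetes API: a CSI volume clone is simply a new PVC whose `dataSource` points at an existing claim. A sketch, with illustrative names:

```python
def pvc_clone(name: str, source_pvc: str, size: str, storage_class: str) -> dict:
    """Build a PVC that clones an existing PVC via the CSI dataSource field,
    giving a developer a writable copy without going through the storage team."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": size}},
            "storageClassName": storage_class,
            # Cloning: dataSource references the PVC to copy (core API group),
            # and must live in the same namespace and storage class.
            "dataSource": {"kind": "PersistentVolumeClaim", "name": source_pvc},
        },
    }
```

A typical use is cloning a production database volume so a test pipeline gets real data in seconds, which is exactly the self-service consumption model being argued for here.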
And of course, there's the Kubernetes infrastructure that supports those deployments, right? Whether they're hosted, on-prem, or in a hybrid environment, the idea behind a genuine cloud-native storage infrastructure is to be able to accommodate all of those. Now, there are many tens of Kubernetes distributions out there, but it's important to support the ones native to the particular cloud service providers, as well as the major distributions. Can you all hear me OK? Sorry about that; I'm getting a message that some of you may not be able to hear me, so let me just repeat myself one more time. My reference was to cloud-like consumption of storage, in line with how applications are consumed. And of course, last but not least, basic data management capabilities when it comes to storage: being able to back up your important data, and not just the volume itself, but the data in the context of full-stack application consistency. That way you could back up your application data and stand it up in a different cluster, or in a different cloud altogether, in its entirety, for a variety of reasons: whether you want to be able to quickly switch over to a different environment, or you're doing this as a means of, for instance... Chris, could you switch over to the next slide, please? Apologies, everyone, my bandwidth seems to be a little unreliable here; I hope you can hear me. We can hear you, George. OK, very good. Sorry about that. I'm actually connected directly to my router here, but I think everyone's home these days, and this is understandable, right? I know how many of you folks get on Zoom or various other collaboration software these days, so freezing and losing packets seem to be a commonality. So I apologize.
In any event, one additional component that I might suggest considering for your cloud-native infrastructure is to understand and vet the idea of making sure that this type of storage provides coverage for most, if not all, use cases. That spans the deployment model, whether it runs in a cloud or on-prem, as well as the storage backend itself. Chris, if you could go to the next slide, please? Excellent. So, just to close out, and again, I hope you can all hear me, and apologies for the unreliability of my internet connection. I wanted to quickly talk about Trident. We've made reference to it a number of times. This is NetApp's integration for Kubernetes. It's an open-source project, and it's been around, I guess by Kubernetes standards, for eons: since 2016, actually. It's a Kubernetes-native application, which is to say it runs as a container in the cluster. The Trident operator monitors the Kubernetes API server, and when it sees a storage request, a PVC, come in, it processes that against the real storage backend. It quickly sets up paths between the containers and the backend NetApp storage. Trident, as I said, is an open-source software solution. It leaves a very small footprint on every node, and you only need to run one copy of Trident per cluster, so it's very lightweight. And, importantly from a performance standpoint, because it is a control-plane component, very much like Kubernetes itself, Trident is not in the data path, so it really doesn't have any impact on performance. Chris, back to you.

All right, thanks, George. If you'd like to learn more about the Trident open-source project, go to netapp.io or check out our GitHub at github.com/NetApp/trident. And with that, I'd like to leave you with this consideration.
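To make that PVC flow concrete: the cluster admin defines a StorageClass wired to Trident's CSI driver, and every PVC naming that class is then satisfied dynamically, as described above. A sketch; the `csi.trident.netapp.io` provisioner name and `backendType` parameter follow Trident's documented conventions as we understand them, and the backend value shown is just an example:

```python
def trident_storage_class(name: str, backend_type: str = "ontap-nas") -> dict:
    """Build a StorageClass that delegates provisioning to the Trident CSI driver.

    PVCs referencing this class are provisioned by Trident against a matching
    NetApp backend; Trident itself stays out of the data path."""
    return {
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {"name": name},
        "provisioner": "csi.trident.netapp.io",
        # backendType selects which kind of Trident backend serves this class;
        # "ontap-nas" here is one illustrative possibility.
        "parameters": {"backendType": backend_type},
    }
```

The separation is the point George makes about the control plane: the StorageClass and operator handle requests, while I/O flows directly between the containers and the backend storage.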
It's not the most intellectual of the species that survives; it's not even the strongest that survives. The species that survives is the one that is able to adapt and adjust best to the changing environment. That is the quality humans have above all: adaptability. And what we're finding is that tech companies, enterprises, and corporations in this space, especially those that have been around for quite some time, can choose to either adapt or perish. So with that, for more information, if you'd like to join us in our developer and open-source community, there's netapp.io, or the pub. There's also cloud.netapp.com, which is where you'll find out more about the various things we're doing in that space. Also our GitHub, Slack, and Twitter, and by all means, please reach out to us; we're more than happy to engage in conversation. And I think that is it, so I'm gonna turn it back over to Ariel, and I think we're gonna do some Q&A. Yeah, awesome. Thanks, Chris and George, for the presentation. We have some time for questions; again, you can use the Q&A tab down at the bottom or just drop them into chat. And listening to the talk, especially dovetailing a little bit on your part, Chris: can you speak a little bit to the barriers that the IT team at NetApp encountered in adopting cloud-native technologies? Yeah, sure can. I think that, you know, I've talked many, many times with our IT team about their transformations over the years, starting with DevOps and then moving into cloud-native, and really the biggest barriers they encountered were mostly cultural. You know, getting folks to understand that just because you're changing consumption context, that doesn't mean that what you know is becoming obsolete or outdated. It just means that you're grafting new best practices and new modalities onto a worldview that you already have. 
So once people start viewing these transitions as career enhancers, career enablers, growth opportunities, rather than being concerned about, you know, resisting change, to put it bluntly, that's where you get past the first big bottleneck. And after that, it becomes a matter of assessing the organization's needs and then seeing what your biggest blockers are to tackle. What's gonna be the biggest blocker in the way technically? And fortunately, thanks to the work that had been done with Trident in parallel, you know, we greased the wheels a lot on that one. Cool. And to dig into Trident and storage a little bit, can you all discuss some best practices for data backup and recovery in this cloud-native context? George, do you wanna handle that? Of course, yeah. Sure, happy to. Hope you guys can hear me. Yes, absolutely. So actually, to the Trident users out there, I have one piece of advice, if you don't already know about this: we have a website which we affectionately call the pub, which is netapp.io. This is the home of all things open source at NetApp, including Trident. And as part of that, we've actually published a four-part series of technical blogs that specifically cover that very topic, Ariel, which is backup and restore by way of Trident. Now Trident itself, as I mentioned at the onset, is not a backup application, I should say; it is really the means of integrating your Kubernetes environment with the back-end storage that is based on NetApp storage systems. But we do have best practices that specifically cover that. And actually, if you don't mind, I'm gonna make a quick little segue and plug something to those folks who are still listening to us. On the 22nd of April, NetApp is making some interesting announcements, in our opinion; it's something that we're quite excited about. 
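The restore side of the backup-and-restore topic George mentions can, in a CSI-based setup, be expressed as a new PVC that uses a snapshot as its data source. A minimal sketch; every name here is an illustrative assumption, not a value from the talk:

```yaml
# Create a new PVC whose contents are cloned from an existing
# VolumeSnapshot. All names below are illustrative assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-restored
spec:
  storageClassName: ontap-gold             # assumed storage class
  dataSource:
    name: app-data-snap                    # assumed snapshot name
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

The restored claim can then be mounted by a new pod, in the same cluster or, with the data replicated, in a different one.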
And it particularly targets this very topic, which is about application awareness. Yeah, what George is talking about is something we are definitely gonna be talking more about next week. So tune in on the various channels; it's pretty exciting. Okay, all right, cool. So thank you so much for the presentation again, and thanks to all the attendees for joining us today. Again, the webinar recording and the slides will be available later today, and we look forward to seeing you at the next CNCF webinar. Have a great day. Thanks, Ariel. Thanks, everyone.