Hi everybody, we're here at VeeamON 2022. This is day two of theCUBE's continuous coverage. I'm Dave Vellante. My co-host is Dave Nicholson, a ton of energy. The day two keynotes are all about products at Veeam. Veeam, the color of green, same color as money. It flows in this ecosystem. I'll tell you right now, Michael Cade is here. He's the senior technologist for product strategy at Veeam. Michael, fresh off the keynotes. Welcome. Danny Allen's keynote was fantastic. I mean, that story he told blew me away. I can't wait to have him back. Stay tuned for that one, but we're going to talk about protecting containers, Kasten. You guys got announcements of Kasten by Veeam, you call it, K10 version 5.0, I think. Yeah, so we just rolled out the 5.0 release this week. Now it's a bit different to what we see from a VBR release cycle kind of thing, because we're constantly working on a two-week sprint cycle. So as much as 5.0's been launched and announced, we're going to see that trickling out over the next couple of months until we get round to KubeCon and we do all of this again, right? So let's back up. I first bumped into Kasten, gosh, it was several years ago at a VeeamON. I'm like, wow, this is a really interesting company. I had deep conversations with them. They had a Cheshire cat grin, like something was going on, and okay, finally you acquire them. But go back a little bit of history. Why the need for this? Containers used to be ephemeral. You didn't have to persist them. That changed, but you guys were way ahead of that trend. Talk a little bit more about the history there and then we'll get into current day. Yeah, I think the need for stateful workloads within Kubernetes has absolutely grown. I think we just saw 1.24 of Kubernetes get released last week or a couple of weeks ago now. And really the focus there, you can see at least three of the big-ticket items in that release are focused around storage and data.
So it just confirms that the community is wanting to put these data services within that. But it's also common, right? It's great to think about a stateless application, but even a web server's got some state, right? There's always going to be some data associated with an application. And if there isn't, then great, but that's rarely how it works. No, but you're right, where'd they click? Where'd they go? I mean, little things like that, right? Yeah, yeah, exactly. So one of the things that we're seeing from that is obviously the requirement to back up, and people putting a lot of data services in there and taking full advantage of the Kubernetes ecosystem, HA, and very tiny containers versus these large virtual machines. We've always had the story at Veeam around portability and being able to move them left, right, here, there and everywhere. But from a K10 point of view, it's the ability to not only protect them, but also move those applications or move that data wherever they need to be. Okay, so, and Kubernetes, of course, has evolved. I mean, the early days of Kubernetes, they kept it simple, kind of like Veeam, actually, right? And then, you know, even though Mesosphere and even Docker Swarm were trying to do more sophisticated cluster management, Kubernetes has now got projects getting much more complicated. So more complicated workloads mean more data, and more critical data means more protection. Okay, so you acquire Kasten. We know that's a small part of your business today, but it's going to be growing. We know this, because everybody's developing applications. So what's different about protecting containers? Danny talks about modern data protection. Okay, when I first heard that, I'm like, eh, nice tagline. But then he peeled the onion, he explains how in virtualization you went from agents to backing up a VMware instance, a virtual instance. What's different about containers?
What makes modern, what constitutes modern data protection for containers? Yeah, so I think the story that Danny tells as well is, we had our physical agents, and virtualization came along, and this is really where Veeam was born, right? We went into the virtualization API, the VMware API, and we started leveraging that to be more storage efficient. The admin overhead around those agents wasn't there anymore. We could just back up using the API. Whereas obviously a lot of our competition would still use agents and put that resource overhead on top. So that's where Veeam initially got the kickstart in that world. I think it's very similar when it comes to Kubernetes, because K10 is deployed within the Kubernetes cluster and it leverages the Kubernetes API to pull out that data in a more efficient way. You could use image-based backups or traditional NAS-based backups to protect some of the data, but backup is only one of the ticks in the boxes, right? You have to be able to restore that and know what that data is. But wait, your competitors aren't as fat, dumb, and happy today as they were back then, right? They use the same APIs, so what makes you guys different? So I think that's testament to Kubernetes and the community behind it, and things like the CSI driver, which enables the storage vendors to take that CSI abstraction layer and then integrate their storage components, their snapshot technologies, and other efficiency models in there, and be able to leverage that as part of a universal data protection API. So really, that's one tick in the box, and you're absolutely right. There are open source tools that can do exactly what we're doing to a degree on that backup and recovery. Where it gets really interesting is the mobility of data and how we're protecting that, because as much as stateful workloads are seen within the Kubernetes environments now, they're also seen outside.
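As a concrete illustration of the CSI snapshot layer Michael describes K10 building on, here is roughly what the Kubernetes-native snapshot objects look like. The names and driver below are hypothetical; the real values depend on your storage vendor's CSI driver:

```yaml
# Illustrative sketch of the snapshot.storage.k8s.io API that data
# protection tools leverage. Names and driver are hypothetical.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass            # hypothetical class name
driver: csi.example.com          # your storage vendor's CSI driver
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-data-snap       # hypothetical snapshot name
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: postgres-data   # the PVC to snapshot
```

Because every vendor implements the same abstraction, a tool sitting on the Kubernetes API can request storage-level snapshots without agent overhead, which is the "universal data protection API" point above.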
So things like Amazon RDS, but the front end lives in Kubernetes, going to that stateless point. But being able to protect the whole application and being very application-aware means that we can capture everything and restore wherever we want that to go as well. So the demo that I just did was actually a Postgres database in AWS and us being able to clone or migrate that out into an EKS cluster as a StatefulSet. So again, we're not leveraging RDS at that point, but it gives us the freedom of movement of that data. Yeah, I want to talk about that, what you actually demoed. One of the interesting things we were talking about earlier, I didn't see any CLI when you were going through the integration of K10 V5 in V12. Yeah. That was very interesting. But I'm always skeptical of this concept of the single pane of glass and how useful that is. Who is this integration targeting? Are you targeting the sort of traditional Veeam user who is now adding, as a responsibility, the management of protecting these Kubernetes environments? Or are you at the same time targeting the current owners of those environments? 'Cause I know you talk about shift left, and nobody needs Kubernetes if you only have one container and one thing you're doing. So at some point it's all about automation, it's about blueprints, it's about getting those things in early. So when you get up and talk about this integration, who cares about that kind of integration? Yeah, so I think it's a bit of both, right? So we're definitely focused around the DevOps-focused engineer, let's just call it that under an umbrella, the cloud engineers looking after Kubernetes from an application delivery perspective.
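For readers unfamiliar with the term, a StatefulSet like the one Michael's demo restored a Postgres database into looks roughly like this. This is a generic hand-written sketch, not the demo's actual manifest:

```yaml
# Minimal Postgres StatefulSet sketch: each replica gets its own
# persistent volume via volumeClaimTemplates, which is what makes
# a database restorable into a cluster as a stateful workload.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:14
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The point of the demo is that the same application data can live in RDS one day and in an in-cluster StatefulSet like this the next, which is the "freedom of movement" being described.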
But I think more and more, as we get further up the mountain, the sysadmins who we speak to, the tech decision makers, the solutions architects, systems engineers, they're going to inherit and be that platform operator around the Kubernetes clusters, and they're probably going to land with the requirement around data management as well. So the specific VBR centralized management is very much for the backup admin, the infrastructure admin or the cloud-based engineer that's looking after the Kubernetes cluster and the data within that. We still speak to app developers who are conscious of what their database looks like, because that's an external data service. And the biggest conversation that we have with them is that the source code, the GitHub or the source repository, that's fine. That will get you some of the way back up and running. But when it comes to a Postgres database or some sort of data service, well, that's outside of the CI/CD pipeline. So it's whether they're interested in that or whether that gets farmed out to the traditional operations team. So I want to unpack your press release a little bit. It's full of all the acronyms, so maybe you can help us decipher. You've got security everywhere, and enhanced platform hardening including KMS, that's key management services. Okay, with AWS KMS and HashiCorp Vault, awesome, love to see Hashi, hot company. RBAC objects in UI dashboards, ransomware attacks, AWS S3, so anyway, security everywhere. What do you mean by that? So I think traditionally at Veeam, and we continue that, right, from a security perspective, if you think about the failure scenario, and ransomware is the hot topic when it comes to security, we can think about security as, if we think about that as the bang, the bang is something bad's happened, fire, flood, blood type stuff. And we tend to be that right-hand side of that. We tend to be the remediation.
We're definitely the last line of defense to get stuff back when something really bad happens. I think what we've done from a K10 point of view is not only enhance that, so we're not going to reinvent the wheel, let's use the services that HashiCorp have built and integrate HashiCorp Vault as a key management system, but then also things like S3 ransomware prevention. So I want to know if something bad's happened, and Veeam actually did something more generic from a Veeam ONE perspective, but one of the pieces that we've seen since we started to send our backups to immutable object storage is, let's be more of that left side as well and start looking at the preventative tasks that we can help with. Now we're not going to be a security company, but you heard all the way through Danny's keynote, and probably when he's been on here, that we're always mindful of that security focus. So on that point, what was being looked for, a spike in CPU utilization that would be associated with encryption? Yeah, exactly that. That could be from a virtual machine point of view, but from K10 specifically, we're going to look at the S3 bucket or the object storage and we're going to see if there's a rate of change that's out of the normal, it's an anomaly. And then with that, we can say, okay, that doesn't look right, alert us through observability tools, again, around the cloud native ecosystem, Prometheus, Grafana, and then we're going to get insight into that before the bang happens, hopefully. So that's an interesting one we talked about, and Zeus sees Veeam moving into this area of security specifically. You were talking to Zeus about that too. Exactly, that's that sort of creep where you can actually add value, it's interesting.
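The rate-of-change idea Michael describes can be sketched in a few lines. This is a toy illustration, not K10's actual detection logic: it flags any sampling interval where the number of changed objects in a bucket jumps far above the recent baseline, the kind of burst that mass encryption by ransomware tends to produce.

```python
def change_rate_anomalies(object_counts, window=5, threshold=3.0):
    """Flag intervals whose change rate is far above the recent average.

    A toy stand-in for the "out of the normal" detection described
    above (NOT K10's actual algorithm). object_counts holds the number
    of changed objects observed in the bucket per sampling interval.
    Returns indices of intervals flagged as anomalous.
    """
    flagged = []
    for i in range(window, len(object_counts)):
        recent = object_counts[i - window:i]
        baseline = sum(recent) / window
        # Flag if this interval changed far more objects than the baseline.
        if baseline > 0 and object_counts[i] > threshold * baseline:
            flagged.append(i)
    return flagged


# A steady workload, then a burst of rewrites such as mass encryption
# might cause. Intervals 7 and 8 are flagged.
counts = [10, 12, 9, 11, 10, 11, 10, 95, 110, 12]
print(change_rate_anomalies(counts))  # → [7, 8]
```

In practice a signal like this would be exported as a metric and alerted on through Prometheus and Grafana, per the observability tools mentioned above.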
So, okay, so we talked about shift left, got that, and then expanded ecosystem, industry-leading technologies, by the way, and one of them is the Red Hat Marketplace. And I think I heard Anton, Anton was amazing. He is the head of product management at Veeam, has been to every VeeamON. He's got family in Ukraine. He's based in Switzerland, but he chose not to come here because he's obviously supporting his family through the carnage that's going on in Ukraine. But anyway, I think he said that the Red Hat team is actually in Ukraine developing, while the bombs are dropping, that's amazing. But anyway, back to our interview here. Expanded ecosystem, Red Hat, SUSE with Rancher, they've got some momentum, vSphere with Tanzu, they're in the game. Talk about that ecosystem and its importance. Yeah, I think, and it goes back to your point around the CLI, right, it feels like the next stage of Kubernetes is going to be very much focused towards the operator or the operations team. The sysadmin of today is going to have to look after that. And at the moment, it's all very command line, it's all CLI driven. And I think with the marketplaces, OpenShift being our biggest foothold, our customer base is definitely around OpenShift. But obviously we're a long-standing alliance partner with VMware as well, so there are the Tanzu operations. Actually, there's support for TKGS, the vSphere with Tanzu Kubernetes Grid Service, as another part of the big 5.0 release. But all three of those and the common marketplaces give us a UI, give us a way of being able to see and visualize that, rather than having to go and hunt down the commands to get our information for something new. Oh, some people are going to be unhappy about that. But I contend the human eye has evolved to see in color for a very good reason. So I want to see things in red, yellow, and green at times. There you go, yeah. So when we hear a company like Veeam talk about, look, we have no platform agenda.
We don't care which cloud it's in, we don't care if it's on-prem or Google, Azure, AWS. We had Wasabi on, great, they've got an S3-compatible target, and others as well. When we hear companies like you talk about that consistent experience and the single pane of glass that you're skeptical of, maybe because it's technically challenging. One of the things we call SuperCloud, that's come up, Danny and I were riffing on that the other day, and we'll do that more this afternoon. But it brings up something that we were talking about with Zeus, Dave, which is the edge, right? And it seems like Kubernetes, and we think about OpenShift, we were there last week at Red Hat Summit. That was like 50% of the conversation, if not more, was the edge, right? And really true edge use cases. Two weeks ago we were at Dell Tech. There was a lot of edge talk, but it was retail stores like Lowe's. Okay, that's kind of near edge, but the far edge, we're talking space, right? So it seems like Kubernetes fits there, and OpenShift particularly, as well as some of the others that we mentioned. What about edge? How much of what you're doing with container data protection do you see as informing you about the edge opportunity? Are you seeing any patterns there? Nobody's really talking about it in data protection yet. So yeah, large-scale numbers of these very small clusters that are out there on farms or in wind turbines, that is definitely something that is being spoken about. There's not much mention actually in this 5.0 release, because we already support things like K3s and EKS Anywhere. That all came in 4.5. So I think to your first point as well, David, it's that we don't really care what that Kubernetes distribution is. So you've got K3s, a lightweight Kubernetes distribution; we support it because it uses the same native Kubernetes APIs and we get deployed inside of that.
I think where we've got these large-scale and large numbers of edge deployments of Kubernetes, they require potentially some data management down there, and they might want to send everything into a centralized location, or at least a more centralized location than a farm shed out in the country. I think we're going to see a big number of that, but then we also have our multi-cluster dashboard that gives us the ability to centralize all of the control plane. So we don't have to go into each individual K10 deployment to manage those policies. We can have one big centralized management multi-cluster dashboard and we can set global policies there. So if you're running a database, and maybe it's the same one across all of your different edge locations, you could just set one policy to say, I want to protect that data on an hourly basis, a daily basis, whatever that needs to be, rather than having to go into each individual one. And then send it back to that central repository. So that's the model that you see. You don't see the opportunity, at least at this point in time, of actually persisting it at the edge. So I think it depends. I think we see both. But again, that's the footprint. And like you mentioned about up in space, having a Kubernetes cluster up there, you don't really want to be sending up a NAS device or a storage device, right? To have to sit alongside it. But then equally, what's the art of the possible to get that back down to our planet as part of a consistent copy of data? Or even a farm or other remote location. The question is, I mean, EVs, we believe there's going to be tons of data. You think about Tesla as a use case. They don't persist a ton of their data. Maybe if a deer runs across the front of the car, oh, persist that, send that back to the cloud. I don't want anyone knowing my Tesla data, I'll tell you that right now. Well, there you go, that one too. All right, well, that's a future discussion.
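A single K10 backup policy of the kind such a global policy would push out to each edge cluster looks roughly like this. Field names are written from memory of Kasten's Policy custom resource and the names are hypothetical, so treat this as a sketch to check against the K10 documentation rather than a copy-paste manifest:

```yaml
# Sketch of a K10 policy protecting one app namespace on an hourly
# schedule. Names and exact field shapes are illustrative.
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: hourly-edge-db          # hypothetical policy name
  namespace: kasten-io
spec:
  frequency: "@hourly"          # the hourly cadence from the example above
  retention:
    hourly: 24                  # keep the last 24 hourly restore points
    daily: 7
  actions:
    - action: backup            # local snapshot; an export action would
                                # send copies to a central repository
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: edge-db   # hypothetical app namespace
```

The multi-cluster dashboard's value is that one definition like this can be applied globally instead of being recreated on every farm-shed cluster.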
We're still trying to squint through those patterns. I've got so many questions for you, Michael, but we've got to go. Thanks so much for coming to theCUBE. Great job on the keynote today, and good luck. Thank you. Thanks for having me. All right, keep it right there. A ton of product talk today. As I say, Danny Allen's coming back. We've got the ecosystem coming, a bunch of the cloud providers. iland was up on stage; they were just recently acquired by 11:11 Systems. They were an example today of a cloud service provider. We're going to unpack it all here on theCUBE at VeeamON 2022 from Las Vegas at the Aria. Keep it right there.