From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation.

Hi, I'm Stu Miniman, and welcome to this CUBE Conversation. We're going to be digging in and talking about how storage in the software world is moving forward to cloud native, containerized environments. Happy to welcome to the program a first-time guest, Paul Susman. He is the product manager for the InfoScale storage and availability products with Veritas. Paul, thank you so much for joining us.

Hey, thanks for having me on. I'm really excited to talk about what we're doing for support for containers and Kubernetes.

All right. So Veritas: I think most people should be familiar with Veritas when it comes to the storage world; of course, a strong and long history. Why don't you level-set us first on InfoScale? I've got way too much history, going back to things like Veritas Volume Manager and the like, but InfoScale today in 2020, how should we be thinking of it, and what kind of reach does it have out in the marketplace?

Yeah, first off, on InfoScale. InfoScale is a product that's used by very critical infrastructure, by the top enterprises: 11 of the top 12 airline reservation systems, 19 of the top 20 investment banks. These are companies that use InfoScale to drive their business, not just an application, but actually keeping their business available and operational. So we've had a long legacy. I talked about some of the history; we were formerly known as Storage Foundation. Going back 25 years, Veritas Storage Foundation, as it was known at that time, was one of the first virtualization technologies, where we virtualized storage for hard drives, right? That's where the Volume Manager came in. We added support for many different file systems, both clustered or shared storage as well as non-shared storage.
We came out with support for Unix-to-Linux migrations, added support for virtualization technologies, and came out with a lot of optimizations for storage efficiency and performance. We've been building upon that legacy ever since. We've recently come out with a lot of support for the AWS cloud as well as the Azure cloud, and support for SAP HANA as well as SAP NetWeaver on Azure, and we have customers who are now migrating their SAP environments up into the cloud.

So, long history of this. We came out with Docker support back in 2016, for Docker containers. We made a bet that Docker was going to win; we actually built our NetBackup Flex appliances around the Docker platform. Turns out that bet wasn't quite accurate. Turns out Kubernetes won. There are some standards now that have come out around storage and networking interfaces, and the world has shifted and is picking up that standardized platform. So we're doing the same.

So what we're doing is a couple of different things. First off, we are coming out with a persistent storage solution leveraging the CSI storage interface, and we're coming out with a high availability solution which leverages some of our legacy code around VCS, around the service group technology we have, and an intelligent monitoring framework to monitor what's going on inside the container. We're going to be adding that technology into InfoScale and releasing it later this year. So that's what we're actively working on. I'm really excited about the fact that we're able to bring forward this legacy, where we've done it incredibly well in physical environments and virtual environments and as customers move to the cloud, to also support containers. We're seeing that mission critical applications are starting to move to containers. We're having a large number of our customers come to us and say: what's your roadmap? Where are you going on containers?
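To make the CSI-based persistent storage idea concrete, here is a minimal sketch of the two Kubernetes objects that dynamic provisioning through a CSI driver revolves around, expressed as plain Python dicts. The driver name and class name are hypothetical placeholders for illustration, not Veritas's actual identifiers.

```python
# Sketch of CSI dynamic provisioning objects. The provisioner name
# "infoscale.csi.example.com" is a hypothetical placeholder, not a
# documented driver ID.

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "infoscale-fast"},
    # The CSI driver named here handles volume create/attach/delete.
    "provisioner": "infoscale.csi.example.com",
    "reclaimPolicy": "Delete",
}

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "db-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        # Ties the claim to the StorageClass (and thus the driver) above.
        "storageClassName": "infoscale-fast",
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

def claim_matches_class(claim: dict, sclass: dict) -> bool:
    """Check that a claim would be served by the given StorageClass."""
    return claim["spec"]["storageClassName"] == sclass["metadata"]["name"]
```

The point of the standard is exactly what Paul describes: the application only asks for a claim, and whichever CSI driver backs the StorageClass satisfies it, so the same manifests work across distributions.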
We've been talking about the Flex appliance, the NetBackup appliance, where we released support for that years ago, and they're looking to actively start moving some of those mission critical apps. But what they're seeing is that the container environment is missing a lot of the enterprise capabilities that exist on physical platforms.

Paul, if I could, and I'm glad we got the news in here, I want to help level-set our audience a little bit on the marketplace. I think back to server virtualization and VMware. We spent about a decade as an industry going from "yeah, it's supported and it works" to "how do we really optimize it and make sure it is fully supported?" When you talk about cloud environments and containerization, we've gone through a maturation journey there too. In some ways it's gone a little faster, and we've learned from the past, but it has been a journey. You talked about Docker; Docker helped bring containers to the masses, and to the enterprises especially. But maybe give us a little more on the things you mentioned, like the interfaces that are supported to enable storage, and how Kubernetes fits into things. Help us understand how it's not just supporting these environments but making sure they're optimized, so they take advantage of the feature functionality people are looking for when they go to these containerized Kubernetes environments.

Yeah, that's a great question. So first off, IDC called out that containerization actually has the potential to replace what VMware has done around VMs and virtual machines. I think there are several driving factors for container adoption, right? It comes down to that term "cattle, not pets," which is often used around containers, where you're able to manage things at larger scale, or a larger number of items.
And it comes down to the fact that the container itself is a much smaller image than a VM, a fraction of the size, and that makes it possible to be more agile, makes it possible to have a higher density of containers versus VMs, and makes them easier to manage as well. Because of that, there's faster adoption with developers, and speed and efficiency come with it: developers are making changes quicker in a container environment, and that's very appealing to customers. So we're seeing a lot of interest in containers.

The applications that went there first were not the typical mission critical applications; they were more web-type applications that didn't have a dependency on persistent data, where the data was temporal. But what we're seeing now, as adoption happens more and more in the container environment and as people realize there are a lot of advantages to a container versus a VM, is that they're looking to take those applications and lift and shift them to a container environment to take advantage of those benefits. That's what we're seeing right now.

Yeah, it's really interesting, Paul. When you looked at virtualization adoption, what a VM really did was bring the whole operating system along with it. So inside that we have not only the operating system but typically one application, though it could be more. A container, by contrast, gets closer to that atomic unit of the application, or in a microservices architecture it might just be a single service inside there. So I guess that brings us to the point: when you talk about storage, what I really care about is my data and my applications. As you mentioned, there are often different types of applications, and developers are building new applications using containers.
As an example, help us understand where Veritas and InfoScale fit in, what applications you're supporting today in a containerized environment, and whether there are things where you're saying, hey, this is what you should do in containers, and at least for certain enterprise environments, maybe we're not quite ready for certain things here yet.

Yeah, so let me take a step back. If you look at the maturity and the technology shift, in my opinion we're at the point today with containers where we were early on with VMs. Early on with VMs, a lot of people were saying that virtual machines weren't really suitable for production code or mission critical applications, that you really should run those on dedicated hardware. What we've actually seen is a shift, and people run pretty much everything on VMs now. It's your first platform by default, instead of a physical server. And now the same thing is happening with cloud as well.

In containers, what we're seeing is that the early adopters weren't looking for those mission critical or enterprise data requirements, things like security and scale and performance. They were okay with the status quo. But as people start to move the things they drive their business with, that they run their business on, they really need those requirements. They need the same set of enterprise capabilities that exist today on VMs, on physical environments, or even in the cloud. A lot of capabilities in the cloud are there: it's very secure, it's very resilient, the data is very durable. Those capabilities exist there, but on containers they've been lacking until recently. So what we're doing is bringing those same capabilities that our customers are used to, for those customers, as they move their mission critical applications to containers.

Excellent. So let's talk about the services that InfoScale offers.
When we first moved to cloud, there were some that thought, oh, hey, wait, maybe I don't need to think about things like high availability and data protection; I'll just architect for the cloud that way. I think we know, from the security standpoint, that it's a shared responsibility model that everybody understands. When it comes to containerization, I'm also often architecting things differently, so I have to think about things a little bit differently, but I don't think that removes the need for some of the services we typically see from solutions like the ones you offer from Veritas. Maybe give us a little bit of understanding: is it the same, is it a little bit different, and what is needed in today's new architecture?

Yeah, that's a great question. If you look at containers and start reading a lot of the documentation around Kubernetes, what they claim and point out is that the underlying storage is responsible for the high availability of the storage. It's not the requirement of the application, it's not the requirement of the IT administrator; they push it back on the storage. And if you look at the way storage is used or consumed with containers, there are really two types of storage. There is block-level storage, which is presented from the disk array. The challenge with block-level storage by itself is that there's no data management, right? What ends up happening is that the database does the data management, and in order to compensate for that lack of data management, the database is often oversubscribed: you present too much storage to the database and you end up wasting space. On the other side of things, the common use case is around files, and the most common way people consume file storage with containers is actually leveraging NFS. NFS was never designed for mission critical applications. It's really designed for very small I/O, and it will guarantee or maintain write consistency.
But if you have multiple applications accessing the same share, who knows who's actually going to win? Somebody will win, and it might not be who you want to win, so you can have data corruption or data integrity issues with NFS, not to mention huge performance challenges. Again, it was never designed for mission critical applications. Those are areas where our customers have looked to us in the past, and look to us right now, to present storage which is very high performance and very highly available, and is often replicated across the metro or across geo locations, across availability zones to other data centers, so that you have multiple redundant copies and you just don't lose data. That's something we've done really well with InfoScale. We've done that for applications that require shared resources, and we've done that for applications that require their own repository, their own data store. So it's an opportunity for customers to have another storage option, one which is persistent, highly available, and higher performance, for use with their containers, other than NFS or block storage.

Excellent. We always used to joke in storage that the only constant is change, and in the cloud native world we know that accelerating change is the norm. Give us the final takeaway: when I think of InfoScale for Kubernetes and containers, how should we think about Veritas, and what differentiates you from the rest of the marketplace?

Yeah, if you look at that, it's really simple. We have a solution which works very well for storage: very high performance, very highly available, scales really well. We're going to be releasing a plugin for Kubernetes that will install on storage nodes and make that storage persistent and available to the application running as a container. We're also taking the technology from our availability suite and carrying some of it forward into containers.
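The multi-writer hazard on a shared NFS file described earlier, where somebody wins and it might not be who you want, can be illustrated with a toy read-modify-write race. This is a deliberately simplified simulation, not NFS client code: it just shows why uncoordinated shared-file updates lose writes while serialized (locked) updates do not.

```python
# Toy illustration of lost updates when several clients share one file
# without coordination. Each writer does a read-modify-write on a counter.

def uncoordinated_writers(shared: dict, n_writers: int) -> int:
    # Every writer reads the counter first (as can happen with stale cached
    # reads), then each writes back its own snapshot + 1.
    snapshots = [shared["count"] for _ in range(n_writers)]
    for value in snapshots:
        shared["count"] = value + 1  # last writer wins; earlier updates vanish
    return shared["count"]

def coordinated_writers(shared: dict, n_writers: int) -> int:
    # With serialized read-modify-write cycles (what a lock provides),
    # every increment survives.
    for _ in range(n_writers):
        shared["count"] = shared["count"] + 1
    return shared["count"]
```

With five uncoordinated writers the counter ends at 1 instead of 5: four updates are silently lost, which is the data integrity problem being described.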
Now, understanding that Kubernetes does the orchestration, our key differentiation is that we're going to be monitoring the dependencies of what's critical for that application: all the mount points, the network interfaces, all the different processes that make up that critical application. We'll be monitoring those applications actually inside the container and then working with Kubernetes, collaborating as far as orchestration goes. So we'll tell Kubernetes when it needs to restart the container or restart a pod. Lots of advantages come with the solution and the way we're building it. Again, it integrates with Kubernetes; we monitor what's going on inside the container, and we'll notify Kubernetes of an event change, and we'll do that instantaneously. Kubernetes looks at the pod; it doesn't look inside the container, right? It doesn't look at the processes, it doesn't look at the mount points. So the pod might be available, but inside the container you might have lost a process, one of your dependencies might have gone away. We're taking that same availability offering that we've done very well with in physical, virtual, and cloud environments and bringing it forward to containers.

Excellent. Paul, any minimum requirements? Kubernetes, of course, being open source, there are dozens of distributions out there. So if I choose any of the native services from the public cloud providers, or from my vendor of choice, I don't have to be on, like, 1.16 or 1.17 to get this? What are the considerations there?

Well, the latest version of Kubernetes I think is 1.18, and they're coming out with 1.19 soon. But Kubernetes, in my view, came out with the standards. They came out with a standard network interface and a standard storage interface. We're leveraging those standards, and we're building a plugin toward that standard.
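The in-container dependency monitoring Paul describes, checking processes and mount points that Kubernetes itself does not see, can be sketched roughly as follows. The process and mount names are illustrative examples, not actual InfoScale agent configuration; in practice a check like this could feed a liveness probe so the orchestrator restarts the pod on failure.

```python
# Sketch of an in-container dependency check: verify that required processes
# and mount points are present, and report what is missing so an external
# probe can fail (and let the orchestrator restart the pod).
# All names below are hypothetical examples.

def check_dependencies(required_procs, required_mounts,
                       running_procs, mounted_paths):
    """Return the list of missing dependencies; an empty list means healthy."""
    missing = [p for p in required_procs if p not in set(running_procs)]
    missing += [m for m in required_mounts if m not in set(mounted_paths)]
    return missing

# In a real agent, running_procs might be gathered from /proc and
# mounted_paths from /proc/mounts; here we feed in example state.
state_ok = check_dependencies(
    ["postgres"], ["/var/lib/pgsql"],
    running_procs=["postgres", "sshd"],
    mounted_paths=["/", "/var/lib/pgsql"],
)
state_bad = check_dependencies(
    ["postgres"], ["/var/lib/pgsql"],
    running_procs=["sshd"],
    mounted_paths=["/"],
)
```

This captures the gap being described: the pod can look available from the outside while a process or mount inside the container has silently gone away.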
That same plugin will be used in Kubernetes and OpenShift and VMware, as well as all the different cloud container offerings. So our intention is to support all of those. We'll be supporting Kubernetes on day one, out of the box, for Linux platforms, with all the same storage capabilities that we have with InfoScale, and with the same agent framework and monitoring framework that we have with InfoScale for availability as well.

Excellent. Well, Paul Susman, thank you so much. It's been great to watch the maturation of the storage environments in the container and Kubernetes world. Thanks so much for joining us.

Thank you. Thanks for having me.

All right, I'm Stu Miniman, and thank you for watching theCUBE.