You know what time of year it is. It's KubeCon time, and I know what that means. That means pancakes: pancake time, pancake time, global virtual EU style. Man, I'm excited for our discussion today, talking all about IT, developer speed, open source. We're going to cover the range of topics with our guests. Coming on first is Itzik Reich, VP technologist, Dell Technologies. Itzik, it is great to have you here today. Hey, thanks for having me. I didn't bring the pancakes, since it's almost Passover, but here's my matzo ball. Good. Good call to bring the matzo. I like it. And our next guest is Nivas. I'm a senior principal product manager, Dell Technologies. Nivas, hello, hello, hello. Hello, Alex. It's great to be here. It's 7 a.m. here in California and I've got pancakes. I love it. And Cheryl Hung, VP of ecosystem at the CNCF, a veteran of the pancake breakfast circuit. Hi, Cheryl. Hey, Alex. It's fantastic to be back on again, and I likewise have brought my own sourdough pancakes. So happy to be here. And as I understand it, your husband got fully into the sourdough bread world that we saw during this pandemic. He did. I've been the lucky recipient of lots of sourdough baking over the last year. So yeah, some of that has gone into today's pancakes. Nice. Well, I've got some pancakes too, but you know what? I need some syrup. So Cheryl, do you have some syrup you could provide me? Well, by a lucky coincidence, I just happen to have some pure maple syrup, which I'm handing to you right now, Alex. Oh, thank you very much. Wow. I'm going to put a little bit on my pancake, but then I'm going to save my pancake for later; I'm just going to add a little drizzle here. Okay. By the magic of the internet, I will also... The magic of the internet. I like it.
We have a cloning factor here, a teleporting cloning. Well, listen, I'm excited for this discussion today. Again, it's IT, developer strategies, and open source. And I think that relates a lot to speed, and a lot to ecosystems. We're looking in particular at persistent data and the role developers play in this world where persistent data is so critically important, especially in a Kubernetes environment, the stateless Kubernetes environment. And Kubernetes, as we well know, has emerged as the orchestration engine for at-scale architectures. A lot of our friends and colleagues are here listening in. So hi, everyone. Let's just say hi to all the KubeCon folks out there. Thank you so much for showing up. We're hoping we can get back into the real world at some point soon, but for now we are enjoying our breakfast here. And I want to start by talking about what has happened over the past 18 months. What is going on with IT and developer strategies, especially with the advent of Kubernetes and cloud native architectures? One of my observations is that Kubernetes is now just there, and we're now seeing this continual need for tooling. The tooling is starting to become much more important in how you think about how you actually manage observability, for instance, or how you manage persistent storage. So, Itzik, what do you say? What's your view on this world right now? So, yeah, I think the biggest thing that I'm seeing from talking to many, many customers almost daily these days is that developers' patience, or should I say impatience, toward getting storage is on the rise, which means they want to treat storage just like we all treat electricity in our houses. We want to flip a switch and, like magic, storage will basically become persistent to those containers.
And of course, for this to happen, storage needs to be super smart, but also super simple to use. The other thing that I'm seeing is that, you know, we've been talking about Docker and containers and Kubernetes for some years now. But in the customer conversations I've been having over the past 18 months, it's really becoming mature. You see almost every customer using Kubernetes these days, because it's the leading orchestrator out there. So almost every conversation already gets to the step where the customer knows what they want to use; they're just looking for ways to ensure that the storage array will become an integral part of this upper-level orchestrator they are using. Yeah, just to add to it, I have a slightly different take, from some analysis I have been doing. In the traditional three-tier architecture that IT has been developing, storage has usually been the third tier, right? It's basically managed, as an Oracle instance or an SAP instance, by a storage admin, a database admin, a backup admin. They maintain the health of the data, they maintain the recoverability, they make sure it's highly available, and all that. Now with microservices, all that work is being split across multiple microservices. So you're seeing each individual small development team managing its own database inside its microservice, and there isn't centralized data hygiene being done. That's why I see a huge opportunity in Kubernetes. And just to add to Itzik's point, they're looking at storage on demand, but at the same time it also needs to be available, resilient, reliable, and durable. Before I go to you, Cheryl, I do want to let everyone know out there that they can ask questions.
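That storage-on-demand pattern for microservice-owned databases is usually expressed in stock Kubernetes as a StatefulSet with a volume claim template, so each replica gets its own persistent volume provisioned automatically. A minimal sketch; the names, image, and StorageClass here are illustrative, not anything specific the panel mentioned:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: orders-db            # hypothetical database owned by one microservice team
spec:
  serviceName: orders-db
  replicas: 3
  selector:
    matchLabels:
      app: orders-db
  template:
    metadata:
      labels:
        app: orders-db
    spec:
      containers:
      - name: postgres
        image: postgres:13   # illustrative
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # one PVC is carved out per replica, on demand
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast-block   # hypothetical class backed by the array's CSI driver
      resources:
        requests:
          storage: 10Gi
```

The developer never touches the array; the CSI driver behind the StorageClass does the provisioning.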
You out there in Kubernetes land, ask your questions here about what you're hearing, about what's on your mind. We have some pros here who are ready to answer them. Cheryl, the question I have: I saw you post something the other day about some of the hot topics out there. You wrote about GitOps as one. And I'm thinking about persistent data. You have a background in engineering, and persistent data has become so important. How does it rate on your list, and how do you view it? And what projects do you see out there in the CNCF that are interesting? So, Alex, that list you're referring to was the 10 predictions I made for cloud native in 2021. And I actually agree with both Itzik and Nivas that deployments are getting bigger. They're getting more complex. People are expecting more out of their cloud native infrastructure. One of those challenges is persistent storage. I started looking at storage in 2017, in fact, and I thought it would be a solved problem within about 18 months. I was obviously over-optimistic in my prediction back at that point. But I do see now that organizations are getting to the point where it's not scary, or not as scary anymore, to consider running databases and running some parts of their applications on persistent volumes and persistent workloads. But one thing that I think is still not solved is what a multi-cloud world means for storage, because moving stateless applications across clouds is not that difficult, but moving data, and moving the analytics, is still very, very difficult. Nivas, you're shaking your head. I saw that. Yeah, I definitely see the point Cheryl is making. When we have multi-cloud, I look at applications being decomposed across clouds, with each cloud specializing in its own services. For example, Google is specializing a lot in machine-learning-type APIs; AWS has its own.
So you have all these services across different clouds, but the data doesn't move that easily; you can't just spin up a data cluster and migrate data very quickly. And this is something we realized, I would say, at the very outset. So as part of our data layer, from the data protection standpoint, and Itzik can add from the primary storage standpoint, we have a concept where we can replicate data across multiple of these clouds. It's almost like drinking data through a straw: you need a highly deduplicated, highly compressed, very efficient layer that can make sure the data is consistent across all these different clouds. And those are some of the capabilities we've been working on. What are some of those capabilities that you've been working on? And how do you think of this in terms of the roles that people play, in terms of the culture that you need to have? Yeah, sure. Thanks for the question. So really, my answer is divided into two. I'm a big believer in the dual-persona mentality, which means that if I'm a developer, there is a very small to nonexistent chance that I'm going to use the storage user interface to carve up a volume or take a snapshot of it. My portal to the Kubernetes world is one command: kubectl. So I want to consume all of those services via kubectl. And what are the services? Apart from just the basic ability to map a volume, think about the other advanced functionality, like snapshots: taking a snapshot of a volume and presenting that data to your AI workload. Or providing a quality-of-service mechanism as an IT administrator, so you know your developer is not going to consume the entirety of the storage capacity or performance. Nivas already mentioned replication, which of course is also one of those features that some customers are looking for.
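The "everything through kubectl" flow described here might look like the following: a PersistentVolumeClaim for the volume itself, plus a VolumeSnapshot from the snapshot.storage.k8s.io API that CSI drivers support. A minimal sketch; the claim, StorageClass, and snapshot-class names are hypothetical:

```yaml
# Carve out a volume declaratively: kubectl apply -f pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: training-data              # hypothetical
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: array-gold     # hypothetical class exposed by the array's CSI driver
  resources:
    requests:
      storage: 100Gi
---
# Snapshot it, e.g. to present the data to an AI workload
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: training-data-snap
spec:
  volumeSnapshotClassName: array-snapclass   # hypothetical
  source:
    persistentVolumeClaimName: training-data
```

A second PVC can then be created with `dataSource` pointing at the snapshot, giving the workload a writable clone without touching the original volume.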
But there is also another feature: reporting capabilities. In the Kubernetes world, there are leading open source monitoring tools that customers want to use to monitor Kubernetes, the storage array, and maybe other services running in their Kubernetes pods. So they want the storage array's reporting capabilities to integrate into this open source world. All of those features are important today. Long gone are the days when customers just looked for the basic functionality to provision a storage volume to their Kubernetes pods. When you talk about kubectl, and as everyone here will know, but maybe there are some people who don't, it's the command line tool that lets you control Kubernetes clusters. When you're looking at configuration issues with that, what are some of the things you really need to be aware of when you're working in this environment, especially when you're thinking about persistent data and the importance of it? This seems to be a major concern. I'm curious how Tanzu approaches that, because I know you all are pretty invested in Tanzu. Tanzu is definitely an area we integrate a lot with, in the context of VMware Tanzu Kubernetes Grid. The way our storage arrays integrate into Tanzu is that VMware provides a container storage interface in that case. We provide vVols, through a VASA interface, to connect our volumes into VMware. And the beauty of vVols is that you don't need to carve up the volumes in advance, as opposed to the old days of VMFS; everything is generated on demand. And because vVols also have this mechanism called SPBM, which is short for Storage Policy Based Management, it can actually dictate policies like the size of the volume, the performance of the volumes, QoS for the volumes, whether the volume should be replicated, and how many snapshots you want to take of these volumes.
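On the Kubernetes side, an SPBM policy typically surfaces to developers as a StorageClass: the vSphere CSI driver takes a `storagepolicyname` parameter, so any PVC against that class lands on a vVol satisfying the policy's placement, QoS, and replication rules. A sketch, with an illustrative policy name (the policy itself would be defined once in vCenter):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold-replicated                   # hypothetical class name
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "Gold-Replicated"    # illustrative SPBM policy defined in vCenter
```

This is the "set it once" half of the fast-food-menu model Itzik describes next: the admin publishes the class, and developers just order from it.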
All of those things are what we call the fast-food menu, if you will, which means that you need to set them up once, and then the developers can just consume them, like a menu at their favorite pancake restaurant, for that matter. Cheryl, do you have a favorite pancake restaurant? I guess you don't need one, because you've got the chef right in your home there. But I'm curious about your thoughts on this configuration issue, because it comes up a lot. In my memory it came up in San Diego, when we were there for KubeCon North America, and the analyst Janakiram MSV described it as this funny role that developers have around configuration: they are now being given a lot more responsibility than they ever had before. I mean, my own background is as an application developer. I was at Google writing backend features for Google Maps and deploying them using Borg, which was the internal predecessor to Kubernetes. So I as a developer had a really nice experience out of this. I never had to worry about provisioning storage. I never had to worry about replication or failover or quality of service or any of the things that Itzik just mentioned, because they were all handled for me by the system. I could assume that I had perfectly reliable storage that was infinitely big. And that is a really nice experience as an application developer, where I don't have to worry about any of that. And actually, I would love to ask you to tell me a little bit about what happens when the storage fails. That's a great question, and the answer really depends on the functionality of the CSI driver or upper-level orchestrator being used. In itself, CSI is not really aware of a storage failure, apart from the fact that the host or a pod went down.
So what we did, apart from integrating into the basic CSI functionality, is come up with our own sidecar that sits on top of, or should I say next to, the CSI functionality. We call it CSM, a great marketing name: Container Storage Modules. It basically injects our specific storage smartness, if you will, our intellectual property, into CSI. So now CSI becomes extensible, and it's far more aware of a storage controller that went down, or maybe a fiber cable that went down, and everything in between. So we inject what we think is the specific storage awareness for those failure conditions you just mentioned. That's one way to do it. Of course there are others. If you're using an upper-level orchestrator like Red Hat OpenShift, it has its own HA monitoring capabilities; VMware TKG has its own HA monitoring capabilities. So what we try to do as a storage company in this particular case is not reinvent the wheel in orchestrators that already have those insights, but rather inject the new things that we think those upper-level orchestrators are not familiar with. Yeah, I would just add one more thing, Alex, on Tanzu, as you asked the question earlier. The way VMware Tanzu works is that VMware has created a Kubernetes-as-a-service platform. So it's basically IaaS plus. IaaS earlier was based on virtual machines, basically, or VMs, and now IaaS has gone up one level, where it is based on containers. So containers on demand: developers are able to spin up clusters on demand. They have a concept of these guest clusters that developers can go in and spin up on demand, but IT still maintains control. So all the things that Itzik was mentioning, that's all part of what we call services that we provide developers.
So developers don't even know about, for example, the data protection aspects, some of the storage aspects, because VMware sort of abstracts that out. And IT is able to maintain control of how much CPU, how much memory, the security policies, storage policies, the data protection aspects. All of that is controlled by IT, but developers have the self-service capabilities. That's how Tanzu looks at it, and we integrate directly with the services layer, which is the supervisor layer, where we have common services we offer developers. So for the most part, as Itzik was mentioning earlier, the developers are totally abstracted away from it. I mean, a developer is just basically working with their native kubectl tools, and they are not even aware that all these assurances, all these very strong services being provided, are already there. Another thing I would like to add that I think is important here: when I first started talking about containers years ago, in our own company conferences, the one question that kept repeating itself was about quota management. In the storage world, if I'm a storage admin, it's my job on the line. How can I ensure that my developers are not consuming the entirety of the storage array, because it's so easy to create those volumes these days?
So really what they were asking about is quota management. And quota management is handled by us, if you're using Kubernetes or Red Hat OpenShift, via our CSM modules on top of our CSI drivers, which basically enable quota management. You, the storage admin, set quotas for your Kubernetes pods, and it just works; if you try to consume more than you're allowed to, the policy is going to block you from consuming that capacity. With VMware, VMware has their own SPBM, Storage Policy Based Management, which is the enforcer of these rules, so you cannot overrule them and consume volumes you are not entitled to, whether it's performance or capacity or the other attributes like replication, snapshots, and things of that nature. So really, if I'm trying to sum it up, it's all about bringing together IT governance with developers who are just going to consume storage, without needing to worry about them consuming the entirety of the storage array, for that matter. So I want to ask a quick question. I know that Dell also supports OpenShift. How do you find the differences in the architectures that you're seeing from various vendors, and how do you adapt to that at Dell?
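The namespace-level half of that governance can be sketched with a stock Kubernetes ResourceQuota; the array-side enforcement described above would come from the vendor's modules on top of this. Namespace and class names are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-storage              # hypothetical
  namespace: team-a                 # hypothetical developer namespace
spec:
  hard:
    requests.storage: 500Gi         # total capacity the namespace may claim
    persistentvolumeclaims: "20"    # cap on the number of PVCs
    # per-StorageClass cap, using the <class>.storageclass.storage.k8s.io prefix
    array-gold.storageclass.storage.k8s.io/requests.storage: 200Gi
```

Once this is applied, any `kubectl apply` of a PVC that would exceed the quota is rejected at admission time, so the developer self-service stays within what IT has granted.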
So Tanzu, as I was mentioning, is basically a ground-up architecture to create Kubernetes as a service, and similarly we have other Kubernetes-as-a-service offerings in the cloud, like EKS and GKE, each managing its own distribution and capabilities. So that's one area. The other aspect we look at: we have had the concept of PaaS even before Kubernetes came along, and now all these PaaS platforms are basically working with Kubernetes. OpenShift is sort of a PaaS solution. So the difference here is that OpenShift focuses more on developer automation: you basically point to the source code, compile the code, create an image, deploy it, with the entire CI/CD aspect, observability, and all the other aspects that developers need. And then it runs on a particular IaaS platform, which can be VMware, or its own OCS; there are many different flavors that OpenShift has. Whereas VMware is coming at it from the ground up. VMware does have a PaaS solution as well, I would just like to call that out; it's now called TKGI, and I believe they are also in the process of creating a PaaS platform for Tanzu. Right now Tanzu is focusing more on the IaaS. So those would be the major differences, but in general the problems are still the same: enterprises are looking at Kubernetes, and a faster way to deploy Kubernetes. With VMware, the idea is that we have a huge number of customers who are already familiar with VMware and all its tools, so IT admins don't need to learn a new tool, and plus it's very, very easy to spin up clusters. All of those advantages that VMware brings from the ground up are what a lot of customers are looking at for on-prem Kubernetes as a service. Cheryl, I'd like to conclude with you, and perhaps we could finish up by speaking to your end users out there. You work a lot with end users. What are the lessons they're learning about their culture,
about how they can transform their culture so they can have that developer speed, so they can have the capability to use persistent data and work efficiently with storage administrators, with their IT teams, with the architects out there, the DevOps teams? What are some of the things you hear? You're right, Alex. So I work with the CNCF end user community, which is the largest end user community of any open source foundation or standards body, so I do get to see a lot of different companies, big and small, and the challenges they face every day. One of the things that I'm excited by, and that I put in my list of 10 predictions for 2021, is the rise of GitOps. GitOps is declaring your infrastructure in YAML, checking it into your repository, and then having software agents monitor that and keep it up to date. So I'm very interested to see how this GitOps trend is going to play out with persistent storage, because one of the benefits of GitOps is the ability to roll back easily. So what does that mean for persistent storage, which is not so easy to roll back? That's something I'm definitely going to keep an eye on. Well, yay. I think this has been a great discussion. Thank you very much, Cheryl; thank you for that finishing point there. I want to encourage everyone to ask your questions now; this is a real good opportunity to answer what you have about integrating with persistent data and what it requires. So please, we look forward to your questions. Thanks again to everyone. You did a great job, Cheryl, Itzik, Nivas. You're awesome. Let's have some pancakes.