theCUBE presents KubeCon + CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners.

Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend. My co-host is Paul Gillin, Senior Editor, Enterprise Architecture for SiliconANGLE. We're going to talk, or continue to talk, to amazing people. The coverage has been amazing, but the city of Valencia is also beautiful. I have to eat a little crow. I landed and I saw the convention center. Paul, have you gotten out and explored the city at all?

Absolutely. My first reaction to Valencia, when we were out in this industrial section, was: this looks like Cincinnati. But then I got on the bus on my second day here, 10 minutes to downtown, another world. It's almost a Middle Ages flavor down there, with these little winding streets. Just an absolutely gorgeous city.

Beautiful city. I compared it to Charlotte. No disrespect to Charlotte, but this is an amazing city. Naina Singh, Principal Product Manager at Red Hat, and Roland Huß, also Principal Product Manager at Red Hat. We're going to talk a little serverless. I'm going to get this right off the bat: people get kind of feisty when we call things like Knative serverless. What's the difference between something like Lambda and Knative?

So I'll start. Lambda is a function as a service, right? Which is one of the definitions of serverless. Serverless is a deployment platform now. When we introduced serverless to containers through Knative, that's when serverless got revolutionized. It democratized serverless. Lambda was proprietary: you write small snippets of code that run for a short duration of time, on demand, and you're done. And then came Knative, which brought serverless to containers, where all those benefits of being easy, practical, event-driven, running on demand, scaling up and down, all of those came to containers. So that's where Knative comes into the picture.
Yeah, I would also say that Knative is based on containers from the very beginning, and so it really allows you to run arbitrary workloads in your container, whereas with Lambda you have only a limited set of languages that you can use, and you have a runtime contract there. It's much easier with Knative to run your applications, for example if they're written in a language that is not supported by Lambda. And of course, the most important benefit of Knative is that it runs on top of Kubernetes, which allows you to run your serverless platform on any Kubernetes installation. I think this is one of the biggest things.

I think we saw, about three years ago, a burst of interest around serverless computing, and really some very compelling cost arguments for using it. And then it seemed to die down. We haven't heard a lot about serverless, and maybe I'm just not listening to the right people, but what is it going to take for serverless to break out and achieve its potential?

Yeah, I would say the big advantage of Knative in that case is that you can scale down to zero. I think this is one of the big things that will bring more people on board, because you really save a lot of money if your applications are not running when they are not used. And also, because you don't have the vendor lock-in: when people realize that you can run it on really any Kubernetes platform, then I think the journey of serverless will continue.

And I will add that there hasn't been enough buzz around event-driven applications yet. There is some, but serverless is going to bring a new lease on life to them, right? The other thing is the ease of use for developers. With Knative we are introducing a new programming model, functions, where you don't even have to create containers.
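The scale-to-zero behavior described here can be sketched as a minimal Knative Service manifest. The service name and image below are illustrative, not from the interview:

```yaml
# A minimal Knative Service. Knative Serving scales the underlying
# pods down to zero when no requests arrive, and back up on demand.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                    # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/example/hello:latest   # any OCI image, any language
          ports:
            - containerPort: 8080
```

Applying this with `kubectl apply -f service.yaml` on a cluster with Knative Serving installed is enough; revisioning, routing, and scale-to-zero are handled by default.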
It will create the containers for you.

So you create the services but not the containers?

Right now you create the containers and then you deploy them in a serverless fashion using Knative, but container creation was on the developers. Functions is going to be the third component of Knative, which we are developing upstream; Red Hat donated that project. It's going to be a code-to-cloud capability: you bring your code and everything else will be taken care of.

So I call a function, or, it's funny, we're kind of circular with this. What used to be "I write a function and put it into a container," this service will provide that function. I just call that function as if I'm developing locally, a local effort. So if there's a repetitive thing that the community wants to do, you'll provide that as a predefined function or as a service?

Yeah, exactly. Functions really helps the developer bring their code into the container. It's a new abstraction on top of Knative, and of course it's also a more opinionated approach. It comes much closer to Lambda now, because it also comes with a programming model, which means there is a certain signature that you have to implement, and other things. But you can also create your own templates, because in the end what matters is that you have a container that you can run on Knative.

What kind of applications is serverless really the ideal platform for?

Yeah, the ideal application is an HTTP-based web application that has no state and that has a very non-uniform traffic shape. For example, if you have a business where you only have spikes at certain times, like maybe the Super Bowl or Christmas if you're selling some merchandise like that, then you can scale up from zero very quickly, and arbitrarily high, depending on the load.
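The code-to-cloud workflow described for Knative Functions looks roughly like the following `func` CLI session; the function name, language, and registry are placeholders:

```shell
# Scaffold a new function from a language template -- no Dockerfile needed.
func create -l go hello-fn      # "hello-fn" and the language are illustrative
cd hello-fn

# Build the container image (via buildpacks) and deploy it as a
# Knative Service in one step; the registry below is a placeholder.
func deploy --registry ghcr.io/example
```

The developer edits only the generated function source with its expected signature; the container build and the Knative deployment happen behind the CLI.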
And this is, I think, the big benefit over, for example, the Kubernetes horizontal pod autoscaler, where it's more of an indirect measure: you're scaling based on CPU and memory. Here it relates one-to-one to the traffic that is coming in, to the concurrent requests. So this helps a lot for non-uniform traffic shapes; I think this is one of the ideal use cases.

That is one of the most cited use cases, but I do believe that you can write almost all applications this way. There are some, of course, that would not be the right workload, but it works as long as you are handling state through an external mechanism, let's say you're using a database to save the state, or you're mounting a physical volume to save the state. It also increases the density of your cluster, because the containers pop up when they're needed, and when your application is not running, the container goes down and the resources can be used to run any other application that you want, right?

So when I'm thinking about Lambda, I kind of get the event-driven nature of Lambda. I have an S3 bucket, and if an S3 event is fired, then my function as a service will start; that's kind of the listening service. How does that work with Knative or a Kubernetes-based thing? Because I can't always think of an event-driven thing that kicks it off. How can I do that in Kubernetes?

So I'll start. It is exactly the same thing. In the Knative world, it's the container that's going to come up, and your service in the container will do the processing of that same event you're talking about. The notification comes from the S3 service when the object gets dropped; that triggers an application, and in the world of Kubernetes and Knative, it's the container that comes up with the service in it, does the processing, and either fires another service or does whatever it needs to do.

So Knative is listening for the event, and when the event happens, Knative executes the container.
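The contrast with CPU/memory-based autoscaling can be made concrete with Knative's per-revision autoscaling annotations, which scale on in-flight requests. The name and values below are illustrative:

```yaml
# Knative scales on concurrent requests rather than CPU/memory.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: spiky-app                # hypothetical name
spec:
  template:
    metadata:
      annotations:
        # Target ~10 concurrent requests per pod; scale out beyond that.
        autoscaling.knative.dev/target: "10"
        # Allow scale-to-zero when idle; cap the burst at 100 pods.
        autoscaling.knative.dev/min-scale: "0"
        autoscaling.knative.dev/max-scale: "100"
    spec:
      containers:
        - image: ghcr.io/example/spiky-app:latest   # placeholder image
```

With this shape, a Super Bowl- or Christmas-style spike drives the pod count directly from the request load, and the app costs nothing between spikes.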
Exactly. There's the concept of a Knative source, which is kind of the adapter to the external world, for example for the S3 bucket. As soon as an event comes in, Knative will wake up that service and transmit the event as a CloudEvent, which is another standard from the CNCF. And when the service is done, it spins down again to zero, so the service is only running when there are events, which is very cost-effective. People really like this kind of dynamic scaling up, from zero to one and even higher.

Lambda has been sort of synonymous with serverless in the early going here. Is Knative a competitor to Lambda? Is it complementary? Would you use the two together?

Yeah, I would say that Lambda is an offering from AWS, so it's a cloud service. Knative itself is a platform, so you can run it in the cloud, and there are also cloud offerings, like from IBM, but you can also run it on premises, for example. So you can also have hybrid scenarios, where you put one part into the cloud and the other part on premises. I think that's a big difference: you have much more flexibility, and you can avoid the kind of vendor lock-in you get with AWS Lambda.

Also, Knative provides specifications and conformance tests, so you can move from one service to another. If you are on an IBM offering that's using Knative and you move to a Google cloud offering that's on Knative, or a Red Hat offering on Knative, it should be seamless, because they are all conforming to the same Knative specifications. Whereas with Lambda, those are custom deployments, so you are only going to be able to run those workloads on AWS.

So KnativeCon, a co-located event, is part of KubeCon.
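The source-to-service flow described here, an event source delivering CloudEvents to a service that wakes from zero, might be wired up as below. A PingSource is used as a simple stand-in for the S3-style source from the discussion; the names are illustrative:

```yaml
# A Knative event source that emits a CloudEvent on a schedule and
# delivers it to a Knative Service, which scales up from zero to handle it.
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: minute-ping              # hypothetical source name
spec:
  schedule: "*/1 * * * *"        # fire once a minute
  data: '{"message": "wake up"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-handler        # the service that processes the event
```

The sink reference is the only coupling: swap the PingSource for any other Knative source and the receiving service is unchanged, since everything arrives as a CloudEvent.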
I'm curious as to the level of effort and user interaction for deploying Knative, because when I think about Lambda or Cloud Run or one of the other functions-as-a-service offerings, there is no back end that I have to worry about, and I think this is where some of the debate comes in over serverless versus some other definition. What's the level of lifting that needs to be done to deploy Knative in my Kubernetes environment? Is this something that comes as a base part of the OpenShift install, or do I have to, you know...

Go ahead, you answer.

Okay, so for OpenShift it's a layered product. You have this catalog of operators that you can choose from, and OpenShift Serverless is one of them, so it's really kind of a one-click install where you also get a default configuration, and you can configure it flexibly as you like. We think that's a good user experience. And of course you can go to cloud offerings like Google Cloud Run or IBM Code Engine; they just have everything set up for you. You also have different alternatives: you have Helm charts, you can install Knative in different ways, however you want. You also have options for the back-end systems. For example, we mentioned that when an event comes in, there's a broker in the middle that dispatches all the events to the services, and there you can have different back-end systems, like Kafka or AMQ, so you can have a very production-grade messaging system that is responsible for delivering your events to your services.

Now Knative has reached... I'm sorry, did I interrupt you?

No, I was just going to say that when we talk about Knative, we generally just talk about the serverless deployment model, right? And the eventing gets eclipsed. That eventing, which provides this infrastructure for producing and consuming events, is an inherent part of Knative, right?
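The broker-in-the-middle arrangement with a Kafka back end can be sketched as a Broker resource plus a Trigger that routes events to a service. The broker class shown assumes the Knative Kafka broker is installed, and the names and event type are illustrative:

```yaml
# A Broker backed by Kafka for production-grade event delivery.
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  annotations:
    eventing.knative.dev/broker.class: Kafka   # requires the Kafka broker installation
---
# A Trigger subscribes a service to events flowing through the broker,
# optionally filtered by CloudEvent attributes.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: orders-trigger           # hypothetical trigger name
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created   # illustrative event type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor      # hypothetical consuming service
```

Producers post CloudEvents to the broker; the Kafka back end handles durable delivery, and each Trigger fans events out to its subscriber service.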
So when you install Knative, you also install eventing, and then you are ready to connect all your disparate systems through events, with CloudEvents, the specification we use for consistent and portable events.

Knative was recently accepted by the Cloud Native Computing Foundation as an incubating project. Congratulations, that's a big step.

Thank you.

How does that change the outlook for Knative adoption?

So we have got a lot of support now from the CNCF, which is really great. We could be part of this conference, for example, which was not so easy before that. And we see really a lot of interest. We also heard, before the move, that many contributors had not started looking into Knative because it was not part of a neutral foundation; they were kind of afraid that the project would go away at any time. We see the adoption really increasing, but slowly at the moment. We are still ramping up, and we really hope for more contributors.

That's very real. The CNCF is almost synonymous with open source and trust, right? So being in the CNCF, and then having this first KnativeCon event as part of KubeCon, and it's a recent addition to the CNCF as well, we are hoping that these events and these interviews will catapult more interest into serverless. So I'm really, really hopeful, and I only see positives from here on out for Knative.

Well, I can sense the excitement. KnativeCon sold out; congratulations on that.

Thank you.

I can talk about serverless all day. It's a topic that I really love. It's a fascinating way to build and manage applications, but we have a lot more coverage to do today on theCUBE from Spain. From Valencia, Spain, I'm Keith Townsend, along with Paul Gillin, and you're watching theCUBE, the leader in high-tech coverage.