All right. Wow, a lot of people here. I'm between you and lunch, so I hope to make this very interesting. When I was coming to the stage, I was actually debating whether to bring paper or not; it would have been fitting to go paperless to talk about serverless and just pull things up on a screen. But I'll bring paper and keep it very low tech, because it works. So, I'd like to start by introducing our panelists, and of course, thank you for joining us. But actually, before I start: in the keynote here, Chris talked about Knative. How many of you had heard about Knative before today? Wow. Okay. How many of you are running Knative? Okay. What about Istio? Okay. I was asking about production, like you're actually running it, but we'll dive more into that. So, let's start by introducing our panelists. Please go ahead. Okay. So, my name is Klaus Deißner. I work for SAP, where I'm an architect working on our functions-as-a-service offering. And about a year ago, I think, I joined the serverless working group at the CNCF, so we are doing CloudEvents. I'm Paul Morie. I'm an engineer who frequently puts his mouth too close to the microphone. I work on different parts of the Kubernetes ecosystem at Red Hat, and I'm leading our Knative efforts at Red Hat. Hi. My name is Idit Levine. I'm the founder and CEO of a company called Solo.io. It's a small company we just launched today, so it's a special day for us. We're doing a lot of work trying to bring everything that's related to serverless and microservices to your legacy applications, to kind of glue them together. So, yeah, that's what we're doing; we're working a lot with serverless, the cloud, Knative, everything that's there. Doug Davis, IBM, CTO of developer advocacy for containers and also the lead liaison between the open source communities and IBM.
Also the co-chair of the serverless working group and the CloudEvents working group, which those two guys over there are a part of. Mark Chmarny, I'm a technical program manager at Google, working on serverless and also on the Knative project. All right. Great. So, I'll start by asking what I think is the most important question, which is: Kubernetes, Istio, and Knative, or "KIK". Is it the new cloud stack? Or is it a yak: just yet another cloud stack, and we're here shaving yaks again? So, let's start with that. You've got the mic; I'm going to start with you. Well, I think if anybody knew the answer to that question, we probably wouldn't need to have the panel, for one reason or another. I tend to think of these things like the great adage: if you build it, they will come. For example, back when the concept of easily extending Kubernetes with new APIs was a glimmer in a couple of folks' eyes, I don't think anybody could anticipate the extent to which custom resource definitions and all of the attached concepts, like operators, Kubernetes-native applications, et cetera, would affect our ecosystem and community in the way that they have. So, we won't know unless we continue to build it. And since we will build it, we want it to be the best thing that it can be. But I personally think there's a huge amount of potential to affect things in a very significant way. Just my two cents. I can jump in here. I think we also have a lot of signal already from the community with the adoption of Kubernetes itself. I saw some numbers today that we're growing to the tune of 250 to 300 percent year over year in the number of advertised jobs related to Kubernetes, or people looking for somebody with those skills.
And the role that Istio and Knative play on top of that platform underscores the signals we're getting from the community: people are already invested in Kubernetes and want new usage patterns like serverless addressed on that platform, using Knative and Istio to fill the gaps around Kubernetes and solve those problems. So, we're talking about a stack, right? Let's separate the stack. The first layer is Kubernetes. I think this one is hands down: there is a winner, and it's Kubernetes. We all agree about it; there is no question about that anymore. Now we're talking about Istio, and Istio is interesting. What we do know is that service mesh is here to stay; that's something we definitely know. Will Istio be the winner? That's a good question. Right now there are a lot of new service meshes out there. App Mesh came from AWS last week, so if you're running on AWS, it's free, and you'll most likely want to use that, right? So you won't go use Istio. There is stuff like Consul Connect that could be interesting on-prem. So I think it's not decided yet. This is, for instance, why we recently created SuperGloo, which is exactly that: an abstraction layer for all the service meshes. Okay, so that's that. Now we're talking about Knative, and that's even more interesting. Does it even make sense on-prem? That's a good question, right? I mean, it definitely makes sense in the cloud, and the reason is that it works much better when you're renting: you're paying for only what you're using. The question is whether it makes sense on-prem, and that's a good question. But what I do like about service mesh and Knative is the packaging, right? It's the PaaS experience, and I think that's here to stay. So, we will see. Nice. Nice. Yes. Want to add to that, Doug? No, actually, she stole my thunder. I agree.
I think Kubernetes is the de facto standard everybody's using. That's great, until the next big thing comes along. Istio is interesting, but it's more for the ops side to understand and wrap their heads around. It's just more complicated: how does it interact with the routing that's already native inside Kubernetes? But Knative is what I'm more excited by, because that hits the end user, and that's where I want the end users to actually live, not with the rest of the stuff. So I think you have the two extremes that are of the most interest to me: the core infrastructure we're building on, Kubernetes, and then the user experience, which is Knative. Istio, cool technology and great functionality, but I'm more interested in the two ends. Nice. Add to that, Klaus? Yeah, so we had it in the previous talks: application development teams don't want to care too much about servers. That's already what it's all about. And we are also running our applications on different cloud providers, so it's really convenient that Kubernetes is offered by all of them. Yeah, so dive more into that then. We talked a lot about Istio and service meshes, but why do we need service meshes for serverless? Or why is it important to have a service mesh when I'm doing serverless? How does it help serverless workloads? So, there are three main use cases today for a service mesh in serverless. The first one is observability; I guess it's very clear why we need that for serverless. The other one is routing. Again, we probably need to route to a function, specifically if we want to mix microservices and serverless or something like that, a kind of truly organic environment. And the last one is security. Again, I think it's a no-brainer why we need that. So, for all of those reasons, it just makes a lot of sense. Okay. Want to add to that, Mark?
I was going to make that a little more real, because I think those three categories are exactly spot on. But, you know, if anybody has ever written more than, say, five to ten microservices, you quickly realize you have problems you probably weren't worried about with two or three. Suddenly you have things that are connecting to each other, getting calls from outside, logging to places. So all those things we talked about, observability, controlling access, being able to define the paths and patterns, are critical. So I agree: to a large degree, it's not the end-user developer experience that we want to expose in the core, but it serves a very critical and important role in the entire serverless stack. Yeah, so in a serverless environment, developers do not need to care about servers so much, and maybe they also shouldn't have to care too much about failing servers. So resilience is an important aspect if you, for example, provide functions as a service. With Istio, developers do not have to bake that into their applications; it can be provided by the infrastructure. Yeah, for me, it's all about the abstraction. All the categories you laid out are exactly right, and people have been doing that for a long time, but the problem is they had to do it themselves. If we can do that for them and then expose it in a really cool way through something like Knative, so they don't have to see all the guts and glory unless they really need to, that's where I think the end goal is: have the service mesh but expose it in a user-friendly way. Yeah, this is touching on one of the most salient parts of the value proposition of Knative, which is that the end goal is that you get to focus on the things that are important to you, and on your value-add, without necessarily having to become an expert in the underlying technologies.
So, for me, full disclosure: I'm not an Istio expert, but I've been able to get Knative to do things for me with Istio, and in doing so I actually didn't have to learn a whole lot about Istio and was still able to leverage it. That's the kind of force-multiplication effect that I think is really key to the value Knative provides: you can stand on the shoulders of people who (I can't speak for anybody else, but they are definitely smarter than me) have figured out how to make these things work together, and harness that without doing the work yourself. Okay, so changing topics a little bit, talking about something a bit more polemic maybe: vendor lock-in, for serverless in particular. Should we be concerned? Is that a real thing we should be concerned about, locking in serverless workloads and functions as a service? Okay, so let's see. I think that usually the compute is not really interesting, right? Where you run a function doesn't matter; you can run it somewhere else. So that's not really a problem. The real problem is the data, and the data is there, right? There's nothing we can do. So that's the real problem we need to solve in terms of vendor lock-in. The only thing I will say is that the new buzzword, multi-cloud, which everybody is talking about, could help, because theoretically what it allows is this: let's say all my storage right now is in AWS; the only thing I need is to run one little function whenever I need this data, but all the rest of my workload can run across clouds, wherever I need it. And I think that could be interesting; for instance, I think that could be very, very interesting. Okay. So, obviously, being part of the serverless working group, all these standards and CloudEvents are important to us.
From an interoperability perspective, obviously everybody wants interoperability, portability, all that stuff, but I think you have to put it in context, right? It's not a showstopper for most people. It's a pain in the butt, it's a hindrance to port your stuff over to another platform, but most people aren't going to be moving their workloads between platforms every other day, right? It's not the kind of thing you do, because the function signatures or whatever are just one part of the entire equation. It's the functionality, the performance, everything else. There are lots of other things to lock you in, and interoperability and portability is just one of them, so you need to put it in context with everything else. So interoperability is important, but do it at the right time, at the right scale, and at the right spots. You don't want to standardize everything, because then you're going to be locked down too much too quickly. I think we'll get there eventually and customers will demand it, but we've got to do it at the right place and time. Nobody said it, so I'm going to say it. I think developers, when they write applications, don't strive to write for the minimum number of people, right? Everybody who ever wrote an application wants to write for the largest audience possible. So the utopia of a portable workload that just magically works on every single cloud is something that we strive for. Obviously, it's not reality today. So I think it's probably important to break that fear of lock-in into smaller chunks and understand what it means with regard to the actual runtime definition, enabling at least the application to be portable, and what it means with regard to the control plane and the way we interact with the APIs and the running application. So in principle, lock-in is probably not something we strive for.
Nobody wants that, but I think the problem is more nuanced than just saying it's either bad or good. Maybe just adding to what Doug said, and also to you: I like the point about the data and where the data is, but at least with eventing you can be notified when something happened to your data and act upon it somewhere else. Right, so diving more into that, because we are talking a lot about hybrid cloud, and hybrid serverless now. And specifically about serverless, I would say functions as a service, we're starting to see more and more moving to the edge, on-prem, and multi-cloud. So, hybrid serverless: is that real, or are we just coming up with yet another buzzword? Is that a real need that you already see in the market or from customers? Can you talk more about that? I have a strong gut feeling that hybrid serverless is a real thing. Going back to the last beat of this panel, Doug, I think you made the point that lock-in is only a problem when you want to move, right? There is a cluster of use cases that speak to that, but there are also reasons that people seek diversity in their cloud platforms. For example, you wouldn't really want a single cloud provider's low-level bug to knock your entire production stack out. Not that that would ever happen, but theoretically, right? So for folks who want to do that kind of planning and risk mitigation, there's a strong use case for distributing functions, or any other Kubernetes workload, to multiple clusters that may run in different cloud providers. That's just one example of a dimension in which hybrid serverless could be a real thing. Okay.
If we define serverless as the user or developer experience which allows the developer not to have to be concerned about the underlying infrastructure, or not have to worry about the scaling of the infrastructure, then hybrid serverless would demand that there is somebody able to provide that level of experience. And I think that's where the ability to deliver that kind of developer experience starts getting a little more complex, because you have to have that same level of network capability, of storage, elasticity, or compute on demand, as well as a number of other orchestration tools delivering the very same experience. Fortunately, with projects like Kubernetes, Istio, and Knative, that surface is becoming much smoother and starts to hide a lot of that underlying differentiation between what you can deliver on-premises versus in the cloud. It's still not at the point where you go to the store, buy a server, bring it into your data center, and get that experience; we're still working on that one. But to some degree, the developer would potentially be able to achieve that same level of experience. So I think it's very much possible. It's going to depend on our ability to deliver those core services on-prem and in the cloud, and hopefully enable portability of workloads so the developer won't be forced to be concerned about whether they deploy on-prem or in the cloud. Yeah, so I will say no, and I will explain what I mean. I don't know how many people here are using serverless right now in production. No, that's exactly my point, right? We're talking about AI, but people are not even using serverless. When they do, they're going to use serverless most likely in the cloud, and probably AWS first, because they are the most mature, and that's how it will be for a long, long time until we need to take care of those things. So, in my opinion, to be fair, we're getting really, really far ahead of ourselves.
Interesting, interesting. Okay, but even with the multiple runtimes and multiple ways you can run those applications, you don't think there is room for this orchestration? So I will tell you what I think. Maybe it will happen way, way down the road, but what you need to know right now is this: the reason people go to AWS is because it's really easy. Lambda is easy. You know what? It's integrated seamlessly with every service that you have right now. You can see in one place all the logs and all the observability. I don't think we're even close to something like that. And that's not even talking about the performance of Lambda, or of serverless, which will be interesting, because you need to make sure that it acts in a very quick fashion. So I will argue that we're not even close to that. And I will say that if someone is using serverless today, and I know quite a lot of people who are doing it in production, they're most likely using it in AWS, maybe in Google, but they're mainly using it because it's just easy to spin up an application, and it's all tied together nicely. In my opinion, we're not even close to this. Interesting. Kind of expanding on that, because you mentioned applications, and I think Mark also talked about serverless in a broader sense: quite often, at least when I'm talking with customers, people get confused about functions as a service versus serverless. Maybe the panel here can help us clarify that and talk a little bit about serverless and functions as a service, the differences, and whether we should be concerned about blurring that difference, right? Yeah, so from my point of view, I really want to say it doesn't matter, because I don't care about the buzzwords.
I tell people, yeah, fine, talk about serverless, talk about functions, so you understand the overall concepts in general, but really, when you go to look at a piece of technology for hosting your application, look at the functionality, right? If it makes sense to break it up into little pieces, easily scaled functions may be the way to go. Are you going to convert your entire application to functions right now? Probably not, right? Do you want something that scales down to zero? Well, then you're starting to get into serverless, right? But then if someone says, oh yeah, I do serverless because I scale down to zero, well, okay, that's fine, but what's the cost for scaling from zero to one, right? If the cost is 10 minutes, you don't want to scale down to zero, right? So you've got to look at the complete picture in terms of what they're actually offering you, and get away from the buzzwords. But since you asked, I tend to think of functions as a service as more about breaking up the application into smaller, consumable pieces that are easily scalable and event-driven; typically you give them the source code and they'll host it for you. And then serverless is where you add more of the operational aspects, which is scale down to zero, zero cost, and stuff like that. But they are so closely related that I try not to get hung up on the difference between those two buzzwords. Mark, want to chime in? Yeah, definitely something close to my heart. To a large degree, if we talk about serverless as this developer experience, like I was saying before, hiding the underlying infrastructure, the ability to only pay for what you actually use, those are properties that are not unique to compute, a function, or an application; those are properties you'd want in your ML platform or your query or SQL platform. So, to a large degree, my strong feeling is that this is not a compute-only problem.
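Doug's point about the cost of scaling from zero to one lends itself to a quick back-of-envelope check. This is a purely illustrative sketch; the function name and the 50% idle threshold are assumptions for the example, not Knative defaults:

```python
def should_scale_to_zero(cold_start_s: float, latency_budget_s: float,
                         idle_fraction: float) -> bool:
    """Rough check: scale-to-zero only pays off if the zero-to-one
    cold start fits your latency budget AND the workload sits idle
    often enough that releasing the instance actually saves money.

    The 0.5 idle threshold is an illustrative assumption.
    """
    if cold_start_s > latency_budget_s:
        # Users would feel every cold start; keep an instance warm.
        return False
    return idle_fraction > 0.5


# A 10-minute (600 s) cold start blows any interactive latency budget,
# so, as Doug says, you would not want that service to scale to zero.
print(should_scale_to_zero(600, 2, 0.9))   # → False
print(should_scale_to_zero(0.5, 2, 0.9))   # → True
```

The point of the sketch is only that "scales to zero" by itself says nothing; the cold-start penalty and the traffic shape decide whether it is a feature or a liability.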
It's much bigger than functions, it's much bigger than compute, and it applies to a lot of different disciplines within technology. Within functions themselves, I think we have some established notion of what that may particularly mean, because of the dominance of one particular player in that space, but I think developers are getting smarter every day, and they expect that same experience, the same kind of capabilities, from multiple different platforms, including now Knative, which hopefully we can live up to. I think it's important to differentiate just in terms of expectation management, where, for example, if you came to Knative and you were looking for an experience similar to Lambda, you wouldn't get that yet. And that's really where I draw the line: I think of functions as a service as being an experiential thing primarily, whereas serverless, in my own mind, is more a set of technology patterns, especially focused on software delivery. And serverless is a necessary but insufficient ingredient to get to FaaS, where you get that ability to really focus very, very clearly on just my own business logic and forget about the rest of the things that go into delivering it into production, rolling across versions of it, et cetera. I think one reason that it began with functions as a service was a purely technical one: those simple, stateless pieces of logic were easy for a framework to hook into and auto-scale. Long term, or I think it's already beginning, those serverless qualities like pay-per-use and auto-scaling will be adopted by all other platforms as well. What maybe remains for functions is the special programming model, stateless, event-driven, and lightweight, which may be suitable for certain use cases. Interesting, no? That makes sense, yeah. And expanding a bit more on Knative then, the constructs, the modules we have in Knative: are they just for serverless?
Because, for example, build is a clear one that is super useful outside of the serverless context as well. I mean, we have been doing that for a while with OpenShift builds, source-to-image, but how do you see the use of parts of Knative outside of the serverless context? Is that even possible? I'll take a stab at this quickly. I think when we first started looking into Knative itself and started identifying some of the properties that we would want the platform to have, we quickly realized that those are not things developers would say, oh, I want that for functions, but I don't want it for applications. Do I want my applications to spin up very fast? Yes, I would want that. Would I want my application to scale horizontally based on the amount of requests that it receives? Yeah, I want that for functions, and I want that for applications. So, to a large degree, I don't see the differentiation based on the scope of what you deploy, whether it's just one function or a whole application; I don't think those properties are unique to either. Yeah, I totally agree, which is why I'm actually very excited by Knative, because, and it actually relates to the previous question of what's in between FaaS and serverless, it's like: I don't care. I just want to host my application, and all the properties of serverless or functions apply to every single application. I don't want to care about networking. I don't want to have to care about the routing stuff. I don't want to care about auto-scaling. Just do it all for me. And if Knative helps get us there with a great user experience so that I don't have to think about it, that's my goal. I don't care what you call it; just give me the functionality I want to get my job done faster. Yeah, anything to add to that? Yeah, okay. I mean, I'll just say this.
I mean, I think, if I remember correctly, part of the Knative packaging actually comes from Cloud Foundry, or was donated by Cloud Foundry. Buildpacks. Exactly. So basically, if you think about what's going on right now with Knative, there is a lot of similarity to what happened before with PaaS, with Cloud Foundry, for instance, as an example. So it's not that different. So of course it can also be for containers, right? Because, I mean, why not? That's what we've been doing all this time. So I think all those things, which I think were borrowed from there in the first place anyway, can be used. And yeah, I mean, if you think about it, as you said, Knative can be used for serverless and more. Okay. I'm surprised that no one has mentioned eventing as the standout, like standalone. Yep. The standout thing that is not essentially serverless, right? Like the consumption of events and routing events to functions, definitely as part of it. Why? Sorry, what's that? Well, but why? Why can't it be for microservices? I'm trying, in my very long-winded way, to say that it can be, right? In the sense that eventing in particular is not new, right? And what the community has done there from version 0.1 to 0.2 is just phenomenal. So I'm looking forward to seeing that evolve. To a large degree, that's going to drive the future of the platform, because that's what developers are looking for. So, what I'm especially curious to see has two components. One is the serverless and FaaS piece, and I'm really looking forward to seeing, in the next year, FaaS offerings that are based upon Knative, because that's the last 20% to get to the experiential things that you would expect from a Lambda, for example. Getting there is an interesting journey.
The Kubernetes nerd in me is also extremely interested to see how we can make progress in Kubernetes. For example, we talked about scaling performance, right? We've already had cause to file at least one change to Kubernetes (I can't remember the pull request number now, and I don't think anybody really cares) to increase the performance of scaling up from zero. That's one dimension I think will be really interesting: whether we can drive performance improvements back into Kubernetes through our work on Knative. And then there's another component, going back to my pet area of interest, or sub-area of interest, in eventing, where one of the novel things about Knative eventing is that you define your own event sources, right? And that gets you back into the cloud-native application facet of life in Kubernetes, where I'm expecting to see a lot of event sources get produced. Managing them, making them work together, and just giving the Kubernetes and Knative communities time to digest exactly how we slice and dice these APIs for event sources seems like it might have some really fruitful outcomes. So, speaking of eventing: cluster federation has already come up a few times today. I think that could also be quite interesting for Knative eventing, because events usually don't start at the edge of a cluster, and they don't end there. So you usually want to look at that event flow across clusters, or with regard to outside sources. It could be interesting to combine cluster federation and eventing. It's funny you mention that, since I lead a team at Red Hat that works on Kubernetes federation. During my break this year, over the holidays, I'm really looking forward to getting federation to deploy Knative resources, which, I can just pitch you now, I will be able to do without writing any code. I'm really looking forward to messing around with different Knative services.
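Since eventing and event sources keep coming up, it may help to see what a source actually emits. A Knative eventing source delivers CloudEvents to a sink over HTTP; the `ce-*` headers below follow the CloudEvents v1.0 binary content mode, while the sink URL, event type, and helper names are hypothetical placeholders for this sketch:

```python
import json
import urllib.request


def cloudevent_headers(event_type: str, source: str, event_id: str) -> dict:
    """Map the required CloudEvents context attributes (specversion,
    type, source, id) to the ce-* headers of the HTTP binary mode."""
    return {
        "ce-specversion": "1.0",
        "ce-type": event_type,
        "ce-source": source,
        "ce-id": event_id,
        "Content-Type": "application/json",
    }


def build_delivery(sink_url: str, event_type: str, source: str,
                   event_id: str, data: dict) -> urllib.request.Request:
    """Prepare (but do not send) the HTTP POST that would deliver one
    event to a sink. In Knative the sink URL is resolved and injected
    by the platform; here it is just a placeholder."""
    return urllib.request.Request(
        sink_url,
        data=json.dumps(data).encode("utf-8"),
        headers=cloudevent_headers(event_type, source, event_id),
        method="POST",
    )


req = build_delivery(
    "http://broker-ingress.example/default",  # hypothetical sink
    "dev.example.ping",                       # hypothetical event type
    "/my/event-source",
    "0001",
    {"hello": "knative"},
)
print(req.get_method())  # → POST
```

Because the envelope is standardized, any consumer (a function, a microservice, a legacy app behind an adapter) can receive the same event, which is exactly why eventing is useful beyond the serverless case discussed above.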
I really think they want to go to lunch. Here's what I'm thinking, right? Why did containers actually catch on? They caught on because the benefit of using containers versus legacy applications was huge. So first of all, in order for serverless to become something, in my opinion, we need to see a huge benefit for someone to use it. I don't know that it's there yet. That's number one. Number two: okay, so you want to use serverless. You want to use microservices. But what about the other stuff that you're already using? People are using legacy. People want to use microservices. Now there is serverless. And in my opinion, the problem is that it shouldn't be "or"; it should be "and". And by the way, I will just preach: this is what my company is doing, so look at our product Gloo, because that's exactly its purpose. What if we can take the legacy application and actually extend it to microservices and serverless, and add functionality there where it makes sense? Use the right architecture for the right problem to solve. And that's why I will say, for the future, I don't think it matters whether it's Knative or something else. What matters is that people use it, and this is what I'm really hoping will happen. We need to make it easy for them and take it all the way to the legacy application. That's what we're hoping to do. A last comment from Mark. I mean, I think that underscores the right way of thinking about it. I'm hoping that next year, or two years from now, when we meet at a conference like this, serverless is just assumed as the de facto usage pattern. You as a developer really should not have to worry about a lot of those concerns. We are already delivering sets of serverless functions, but to some degree we want you to come to a platform as a developer and not have to make, as the number one step, the choice: am I going to be building a function, or am I going to be building an application?
I should just be able to write my application and expect a lot of the things we're looking for from serverless to be there by default. This is exactly why we are so interested in Knative: because we saw exactly that. I don't want to choose. I can do functions, applications, microservices. I can have my integrations with event sources, legacy code, or new event sources. That's why I think we're so interested in that project. I'd like to thank the panel for joining us. Thank you all for staying with us, and I think that's it. Thank you.