Good to see you all here. I'll start with a quick show of hands: is anyone using Knative? Can we see hands? And you're not using it, but you have heard of Knative? OK. So, this is the maintainers' track, so we'll be talking about what the Knative project has been up to, and we'll also talk a little bit about what the Knative project is. I'm on stage with my esteemed colleagues here. I'm Naina Singh; I work for Red Hat. Paul Schweigert, IBM. Dave Protasowski from VMware. Roland Huss from Red Hat. Mauricio Salatino, or Salaboy, and I work for a company called Diagrid. OK, so let's start with what and why Knative. The short answer is: it makes Kubernetes simple. The long answer is: it's an open source platform that enables developers to create, build, deploy, and manage workloads on Kubernetes through an abstraction layer. And it can do that in a serverless fashion — that's just icing on the cake. So the next question is how. Knative comprises independent components that all work together really well. With Knative Functions, you can create a function in two steps and deploy it on your cloud or cluster. With Knative Serving, you can manage applications that scale up and down based on demand. It also gives you traffic splitting, which means you can test a new feature without disrupting your users. And Knative Eventing lets you create declarative event-driven apps that can respond in real time. So to sum it up, Knative is a go-to tool choice for developers to create, build, and manage container-based application workloads. I gave a glimpse of what the independent components of Knative are, and here they are: Serving, Eventing, and Functions. And we have a CLI that ties it all together. We're going to go a little more into what these components can do. But Serving is — I call it — auto-magic HTTP services.
You give us a container in one step; we give you the URL. Knative Eventing is basically a whole gamut of tools — or infrastructure, if you will — that allows you to create event-driven apps. And Knative Functions lets you bring your own code: we provide the project scaffolding and the templates, and we even take building the container off your list and give you the URL directly. Knative is built on a plugin architecture. What that means is that you don't need to do something specific for Knative; it can work with the stack you have. And for Serving, Eventing, and Functions, the slide lists the technologies each works perfectly well with. With that, I'll hand it to my colleague to talk about Serving. So Serving — we said it's auto-magic HTTP services. Basically, if you're running an HTTP application on Kubernetes, there are a lot of things you may not want to have to deal with that Knative can do for you, so it makes it easier to run HTTP applications. For example, we can scale your application based on the number of requests coming in. You don't have to worry about how many replicas you want; you can set a concurrency level — the number of requests per instance — and we scale up to meet that demand, automatically. If you want to do a rollout based on percentages of traffic, we have a capability that allows you to do that: you can give a traffic percentage to each of the rollouts that you have. We take point-in-time snapshots of your application, so if you make changes and need to roll back, it's very easy to do that — to exactly what it was. You don't have to worry about what some Docker tag referenced, that kind of thing; we make an exact copy of the configuration and your code that you can roll back to as needed. We also add automatic health checks and automatic TLS provisioning. So, a lot of tools you can use to make things easier.
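As a rough sketch of the concurrency target and traffic splitting just described, a Knative Service might look like this (the image, service name, and revision name are placeholders, not from the talk):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        # target 10 concurrent requests per replica; Serving scales to match
        autoscaling.knative.dev/target: "10"
    spec:
      containers:
        - image: ghcr.io/example/hello:latest   # placeholder image
  traffic:
    # pin 90% to an earlier revision snapshot, send 10% to the latest
    - revisionName: hello-00001
      percent: 90
    - latestRevision: true
      percent: 10
```

Each edit to the template stamps out a new immutable Revision, which is what makes the percentage-based rollout and instant rollback possible.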
And the nice thing is what you can go from: there's a lot of YAML required to deploy an application — you need a Deployment, you need a Service, you need an Ingress. With Knative, you can cut out all those little pieces, and you go from that behemoth on the — whatever this side is — left, audience left, to what we have on audience right, which is 10 lines, give or take a little bit. So it makes it easy to deploy applications. And with that — hello. Some recent highlights — this is stuff I've personally been working on, trying to improve stability. One of them is not dropping requests during upgrades. We also have this activator component that buffers requests while your revision is scaling up, since that can take some time; when the activator moves into and out of the data plane, we make sure we don't drop requests. We also did some expansion of timeouts. If you're scaled to zero and then activation fails, you'd clearly like to have a timeout there. Per-revision request timeouts were also important to add — especially for WebSocket workloads, where you'd want to set the request timeout very large and separately set something like an idle timeout. One outstanding bug we fixed recently: with cert-manager there was a limitation on the domain length, so if you had really long namespace names or really long service names, that's now fixed with auto-TLS. Something else we've been focusing on is security. Security-Guard is an extension that lets you apply policies to incoming requests — e.g., reject certain headers, disallow certain query params — letting the operator apply policies that way. And we also finally changed the default domain.
We were using example.com everywhere, so when you deployed, it would actually expose your ingress on that domain, which doesn't exist. We decided to take a more secure-by-default stance and make services cluster-local, and let people expose domains after the fact, or let the operator do that. For the roadmap — there's a link there — to highlight: internal encryption is something we're working on now. The reason is PCI and similar compliance, where you need the data plane to be encrypted. A big project I want to work on — it's been a while — is that we still use OpenCensus, but OpenTelemetry is hitting GA for a whole bunch of its specs and libraries, so we want to migrate to that. Gateway API: as Naina mentioned, we have plugins for the different networking layers — you can switch from Istio to Contour, et cetera — but each of those requires essentially a special plugin. We want to get out of that, and just program the networking using the Gateway API directly. With that, I'm spending time in that upstream community trying to close gaps and get positioned so that Knative use cases are heard. So if you use Knative and want to use that API, go look for those gaps and comment, please. And finally, more scale and performance testing: the Serving API has been stable for two or three years, and I want to do scaling tests and improve performance, because there are definitely things to do. So I'll hand it off. Let's go to the second pillar of Knative, which is Knative Eventing. Knative Eventing is all about creating event-driven applications, which means you get primitives for building up an event mesh. In the center of this is the broker. As you see, these are entities — everything is reflected by CRDs behind the scenes — and of course there's a control plane and a data plane that pick them up.
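To give a feel for how small these CRDs are: creating the central broker is a few lines of YAML (the namespace here is a placeholder). Sources then point their sink at it, and triggers subscribe to it:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: demo   # placeholder namespace
```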
There are two main concepts. One is the source, where you pick up events. It's important to note that these events are in the form of CloudEvents, which is another CNCF standard. These sources that create CloudEvents are really adapters that pick up events from the outside world — an event from an S3 bucket, say: if there was a file change, the source creates a CloudEvent and sends it on to the broker inside. The sources are connected to the broker, and the broker is responsible for dispatching all these events to interested listeners. You can of course have a Knative Service as a listener, which has the big benefit that the service is not running — it's scaled down to zero — if there are no events, so Serving and Eventing are a perfect fit together. By the way, you can use Serving and Eventing completely separately; there is no dependency between them, so you can use either Serving or Eventing, or of course both, which makes total sense. What's interesting is that you have a flexible way to register your applications with the broker, with so-called triggers. A trigger can have filters where you specify which type of events you want to listen for. Also interesting is that your service can then return another event as a response — everything is HTTP-driven here. You get an HTTP request carrying a CloudEvent, your service returns another CloudEvent, and that is ingested into the broker again and dispatched to the other listeners. So you can really build a very flexible mesh with that. And of course it doesn't have to be a Knative Service; you can use a regular Deployment. Everything that is — we call it — addressable, i.e., has a URL in its status, can be used as a target for such an event.
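A trigger with a filter, as just described, might look like this sketch (the event type and subscriber name are made up for illustration):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: on-file-change
spec:
  broker: default
  filter:
    attributes:
      # only deliver CloudEvents whose type attribute matches
      type: com.example.file.changed
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: file-processor   # any addressable resource works here
```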
Finally, there's one interesting part we're currently working on — or kind of reviving the effort — which is event discovery, because you need to know which event types are available in your system. Every resource creates events of a certain type, and this registry can be used for lookups, so you can find out where and how you can register for those events. So that's, in a nutshell, what Eventing is all about. Now let's move on to the recent highlights and the roadmap. One thing that landed in Knative Eventing recently is multi-namespace API server sources. I should say there are several sources that come out of the box with Knative, and the API server source is one of them. It's a source that runs constantly, listens to the API server, and creates CloudEvents out of API events. Previously it was only able to listen for events coming from a single namespace; now you can use a selector to select multiple namespaces. So that's a new thing. Another thing is the broker. I haven't really mentioned how the broker is implemented. By default there is a reference broker, so to say, which stores all the events in the memory of a process, so it's probably only suitable for development. For more production-ready systems there are other backends, like Kafka or RabbitMQ, that you can plug in, and which also give you more resilience features for your events. And for Kafka there is now also a way to scale up your broker — and your Kafka source, where you import Kafka messages from a topic into the broker — via KEDA. Who of you knows KEDA already? OK, some of you, that's nice. KEDA is also a very interesting project, and we actually often get the question whether Knative is competing with KEDA, but Knative and KEDA are really complementary.
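A sketch of the multi-namespace API server source described above — the service account, label, and sink are placeholders, and the selector field name is from memory, so check the docs for your release:

```yaml
apiVersion: sources.knative.dev/v1
kind: ApiServerSource
metadata:
  name: pod-events
spec:
  serviceAccountName: events-sa   # needs RBAC to watch the resources below
  # select several namespaces by label instead of one hard-coded namespace
  namespaceSelector:
    matchLabels:
      team: payments   # placeholder label
  resources:
    - apiVersion: v1
      kind: Pod
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```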
So Knative scales on HTTP-based load, and KEDA scales on everything except HTTP — it scales based on the number of messages in a queue, for example, and other things. And here we can use KEDA to scale our infrastructure data plane, so that the broker is really only running if there are actually events in the system. This is very, very nice for cost saving as well. So those are the highlights, and then there's the roadmap. By the way, you'll find all the roadmaps we're showing here on GitHub, because they are all stored in GitHub projects; you can follow those. We are now focusing on production-grade features like security — Knative TLS support out of the box. We can already use Istio or another service mesh for transport security, but there's also a need for native TLS support. We're working on multi-tenancy support, which allows you to run multiple tenants whose events are separated from each other. Integration with Istio is also a topic, so that you can play together nicely with a service mesh. OIDC client authentication support is for authenticating the sender to a broker, because at the moment everybody can send to a broker, which is not optimal; we need authentication and authorization features so that we can restrict who can send to a broker. And finally, as I mentioned, event discovery is something we continue to work on, to make it more flexible and so on. That's the roadmap for Eventing. Now let's move on quickly to the next one. This is kind of an interlude: it's actually the client. Maybe you don't know the client well. The client is a CLI for Knative, like kubectl. Of course, everything you can do with it you can do with custom resources, but sometimes it's much easier to have a typed interface with which you can interact with your cluster, and there kn is very helpful.
So you have CRUD operations you can use for services and for brokers. You have flexible plugin support, which allows you to extend the functionality of kn; we'll see functions in a second, which leverage this plugin support. One special thing, different from kubectl plugins — although it works nearly the same as kubectl plugins, with external commands that are found by a naming convention — is that you can also inline them. If your plugins are written in Go, you can compile them into one binary; we have a builder system where you select the plugins you want to have. The benefit is that you get a single binary with all your plugins already included, so you can easily distribute it. And recently we also have entries on Artifact Hub, which is a way to discover plugins; you can install plugins either directly by downloading them from our website or using some other installation mechanism. On the roadmap there's one big thing, which is plugin context sharing, meaning we want to allow a plugin to transport certain information to other plugins. This is a little complex to describe, but it's useful for chaining your plugins, to make things smoother and the user experience better. Other user-experience improvements are about event discovery — the way you find events — and automatic trigger generation when you want to integrate functions, so that you can combine them. Finally, secret management is on the table, and also improving export: we already know how to export Knative services for GitOps-like operations, so you can interactively create your service with the CLI and then export it as a YAML file. So that's it so far for the client. Now let's go to functions.
Let's talk a little bit about functions. Both the client and functions are CLIs, but in functions we focus more on the developer lifecycle and on the programming model of creating functions. How many here are developers? OK, yeah. This section is more on the client side, right — how do you consume Knative? Functions are created with the mindset of people who already have Knative — Serving and Eventing, for example — installed in their clusters. It could be Serving, it could be Eventing, but now we're more on the client side. What we're trying to create here is a simple flow where we abstract away, a little bit, the fact that our functions are going to be running on Kubernetes. So this is for developers, and also for platform builders who want to provide a functions-as-a-service experience on top of Kubernetes. With Knative Functions, you have the func kn plugin, or the func CLI, that you can use to create a function from a template, in any of the supported programming languages — we have templates for different languages. As soon as you've created the function, you just go and add the function logic — the developer writes the function logic — and then, using func deploy, they can have that function running in a Kubernetes cluster with Knative, and of course you get the URL back to interact with that function. It's pretty simple, and it's based on templates. The idea here is to keep developers from thinking about exposing ports or creating a web server so people can access their functions; we wrap all that logic up. And we actually have different templates for dealing with HTTP requests or CloudEvents — if you're building a more event-driven system and want your functions to consume CloudEvents, we have templates specialized for that.
The idea here is also to go one step further in removing developer chores — creating Dockerfiles, creating containers for their functions, all that stuff we're trying to hide away behind the func experience in general. So when you want to build and deploy a function, the idea is that the developer focuses on writing code, and the func CLI builds the function container and also knows how to deploy that function to the configured Kubernetes cluster. This is just what a very basic function template looks like. Again, when you do func create, you can specify the language you want to use. All these runtimes and stacks are supported — Node, Spring Boot, Rust, TypeScript, Python, Quarkus, and Go — and this is the Go one. As you can see, the function signature is pretty simple. In this case it's one of the functions that expects an event — a CloudEvent, which is why you see the import there. Whenever we call this function, it's going to get the CloudEvent, parse it, and hand the event over as a Go struct so the developer can do something with it. And it can also return an event, as you can see there. Pretty simple. A little bit about what we're doing: the functions working group is pretty active, in that we're looking not only at how to improve the developer experience, but also at our target users, platform engineers who are trying to build these platforms — picking up Knative Functions and building an experience on top of it. So there's a lot of work around finding the right abstractions for platforms, and making sure we're not leaking anything from what's installed in the cluster into the developer experience.
For that, we're working on a new scaffolding mechanism — how we create these function templates, and what capabilities to give developers out of the box and to extend. That's one part. Then there's pipelines-as-code, because we need to build these functions somewhere. If you build functions locally — with what we have today, as of the last release — you of course need Docker locally so you can create a container; the user doesn't interact with Docker directly, but the func CLI uses the Docker daemon to build the container. But we also have different approaches. For example, we have something called on-cluster build, created by Svinak here, that basically allows you to run Tekton pipelines in the cluster. If you have Tekton installed in your Kubernetes cluster, you can say func deploy and the build step happens remotely in the cluster — which again makes a lot of sense for platforms, right? When you want to say, "I don't want my developers to have Docker installed on their laptops" — company policies, or maybe because they don't have access to a public registry, or the credentials for it, or whatever — you can just use this remote-build approach. And then there's a project called Pipelines-as-Code, which provides a more flexible way to define these pipelines that can run remotely, so we're integrating with different technologies to do that. We're also looking pretty actively at creating Wasm functions. That's not supported in the latest release, but it's something we're investigating and trying to include. And I'm also working a little bit on the Dapr integration — I work on the Dapr project as well.
So, making sure our functions are Dapr-enabled, which basically means that when you want to access generic infrastructure — storing data in a database, sending messages, reading secrets, and things like that — you go through this other abstraction layer. Again, from the functions perspective, that should be completely hidden from the user; the user should just have some default interface on the function to interact with it. So, what I wanted to show in a quick demo today — I don't know how much time I have; some minutes, OK. I'll finish this quickly. What I wanted to show is this new scaffolding mechanism being created by Luke from Red Hat, which again comes out of investigating and analyzing where the boundaries are for function developers: what should be provided by the platform, and what behaviors our functions should have. We're taking a somewhat different approach from the one we have released today, and this investigation has proven to be very eye-opening and mind-expanding. I think we're actually on the right track. So I'll try to show something like that. It's difficult if you don't know the functions project, but I'll try to explain the logic behind the changes. With this new approach, we can actually run functions locally without needing Docker; if you want to create a local container for your function and run it with Docker, you can also do that. We're looking into defining more specifically what the function interfaces are and what's provided by the platform. And now we have this concept of instance-based functions. If you look at this interface here — today these are basically static methods, right, in Go.
So basically, every time the function is called, we execute this method. But if you have some initialization code or shutdown code you want to run — or, for example, if you need to add liveness and readiness probes to your functions that are a little more custom — you need more control, and with our current approach that's not easy to achieve. So we're looking into expanding that. And of course there's the Dapr thing I mentioned before. So let's go to a short demo. Let's see — yeah, I think it's going to work. Perfect. So let's do this. Let me see if I can see my screen. I'll go here to my VS Code, where I created a new function scaffolded with the new method we're working on — this is based on a branch. Can you see the function there? Yeah. As you can see, we have a slightly different function signature in this case, and we're now part of a Go struct — part of something. This is a way to create instance-based functions that have some context we can use. And because we're now part of a bigger scope — this is not a static handle method — we can start doing other initialization code, or, for example, readiness and liveness probes. Before, with the static-method approach, we wrapped your function code in a web server that started with a bunch of default endpoints — for example, default liveness and readiness probes — but if you wanted to extend that base layer, that wasn't easy; you'd need to provide your own base layer, and that gets complicated. With this new approach — let me show you first how this looks from the terminal. Let's clean the screen. Let me see if I'm in the right place. Yeah, I'm inside my function.
The function is pretty simple: it's just a Go project with the func.yaml file that describes what the function is doing, the programming language, and some other details. And as soon as I have that, I can do something like func deploy, and this is going to build a container. In this case, with the new approach, we're not even using a Docker daemon to build the container — we're using a library that creates it. By the end of this command, what we should have is a URL for the function. So I went from a function written locally, created from a template, to having it deployed on a Kubernetes cluster where Knative is enabled. So I can make an HTTP request to that URL, just to make sure it's actually running on my cluster. There you go — the request was received. That's the most basic function you can create, the one that comes out of the box from the template. But again, if you want to start customizing the lifecycle of the function, you can start enabling, for example, some of the lifecycle methods we have here. We have the readiness probe — so this is the function, and what I wanted to show here is that this Ready function, which is part of my function struct, is actually implementing one of the lifecycle hooks that we provide. You know that when you deploy something to Kubernetes, you need to make sure the cluster understands that this container is ready — actually up and ready to be used. The same with the Alive function here, for the liveness probe. A more common use case is initialization code: in the previous version, if you wanted to bootstrap or load some libraries, or do something before your function code is called, you had to do that on every call; in this case, you have a Start lifecycle method that you can plug into and do that.
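An instance-based function with the lifecycle hooks just described could be sketched like this. The method names (Start, Ready, Alive) and the struct shape are illustrative — check the scaffolding branch for the real interface:

```go
package main

import "fmt"

// MyFunction is an instance-based function in the spirit of the new
// scaffolding: state lives on the struct, and optional lifecycle hooks
// sit next to the per-request handler.
type MyFunction struct {
	db string // stands in for a connection set up once at startup
}

// Start runs once before the first request: open connections, load caches.
func (f *MyFunction) Start() error {
	f.db = "connected"
	return nil
}

// Ready backs the readiness probe: only accept traffic when deps are up.
func (f *MyFunction) Ready() bool { return f.db == "connected" }

// Alive backs the liveness probe.
func (f *MyFunction) Alive() bool { return true }

// Handle is the per-request entry point.
func (f *MyFunction) Handle(msg string) string {
	return fmt.Sprintf("handled %q with db=%s", msg, f.db)
}

func main() {
	f := &MyFunction{}
	fmt.Println(f.Ready()) // false: Start has not run yet
	if err := f.Start(); err != nil {
		panic(err)
	}
	fmt.Println(f.Ready()) // true
	fmt.Println(f.Handle("ping"))
}
```

Compared with the old static-method approach, the scaffolding's web server would call Start once at boot and wire Ready and Alive to the Kubernetes probe endpoints, so the per-call handler no longer pays the initialization cost.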
So we're trying to expand the function interface. Before, we didn't have a very strong interface — we only had the function signature — but now the function interface, with the lifecycle methods, gives you more control to cover more use cases. This was one of the major requests. So if I do func deploy again — I don't know if I saved the file; hopefully yes — I should now be able to see that the function has this custom liveness probe and readiness probe. That sounds super simple, but when you're building real-life applications, you'll need to extend them based on the other services that are available — the databases, or whatever your function is consuming. So if I look at the logs of the function deployment, you can see that, yeah, it's now calling those custom endpoints. And the same goes for the Start hook, where you do all the startup work. I'll finish the demo with something pretty simple, something we're just evaluating. As you can see, the function doesn't include any dependencies here, which basically means that all the methods — Start, Ready, Alive, or whatever lifecycle thing we want to expose — are part of the function signature. And that allows us to, for example, add methods for saving data, emitting events, reading secrets, or even communicating with a workflow engine and starting business processes, things like that. The idea with these signatures is just to have clear signatures for executing certain behavior, without pushing the platform toward a specific implementation of that functionality.
What we're doing here is trying to give developers the tools they need so they don't have to worry about adding a dependency on a database, or on a message broker, and that kind of stuff — something the Knative Eventing project is also doing. And that's pretty much it. When we run a Knative function — I also want to show here, let's list the Knative services, let's see — that's a Knative Service. You can see it's running on a URL, but if I list the pods here as well — well, the function is no longer running. There you go; I'll just call it so we can see it running. So, as you can see, we have the Knative queue-proxy running in there, but right now the function project also added the Dapr integration, so you can see it's injecting the Dapr sidecar, which gives you access to all these infrastructure methods to do extended stuff. So yeah, this is the experiment we're working on. If you're interested in this topic, or if you're a platform builder thinking about providing a platform-as-a-service experience, we'd love to get your feedback. And if you're a developer and want to help us create more templates for more advanced use cases, that's more than welcome. I think that's pretty much it on my side. Are we done with that? Questions? Five minutes for questions. Yeah, we have a question here. — Actually, this was a very great talk; I like the updates. I do have a couple of questions. The first is regarding Knative Functions. You mentioned that one uploads code and, under the hood, Knative is going to build the image and then run the function. My question is: does that impact performance and add latency? Have you considered running the source code directly, in some lighter-weight instance, which might give maximum performance?
Just wondering about the thinking here, because in the middle layer — I know the underlying system runs in pods, so building the image may be one extra step — but in the future, have you considered making it more straightforward, like directly running source code, to speed things up? — Yeah, so remember that we're running on Kubernetes, and on top of Knative, so we need to create a container; I think we cannot escape that, except for something like Wasm, right? We could use WASI to create Wasm functions and then just deploy that. That's under investigation, and it will add another way of deploying functions. func is really good because it's very pluggable, so you can say: build functions this way, or deploy functions this way, or run functions this way. Go ahead. — Let me just add that if you're developing locally, you don't need to create containers. For example, if you're writing a Node.js function, you can use npm and all that to do your local testing. — Oh, that's very nice. — So the developer experience locally does not need to be in a container; we provide local tools for every language we support. — Got it. And the second question: I notice there are a couple of patterns Knative provides, especially functions, and also Serving and Eventing. If a user comes to Knative and wants to, say, serve a website, or has different usage scenarios — given there are multiple patterns, is there a recommendation for which one is fastest to adopt for their use case, or does it just depend? — I think it's mostly going to be Serving, right? Usually we see Serving more. — Yeah, I think what's interesting about Knative is that you can use the different components independently.
Being at the booth, I heard from some people using Eventing but not Serving, and some people using Serving but not Eventing. And interestingly enough, you could use functions just to build containers and then deploy them somewhere else — using Deployments and things like that. So I think there's a composability to all these components, and hopefully using them together is greater than using them individually. But the choice of components will depend on the use case, so I can't really answer that unless you have a specific use case, is my guess. — Got it, got it. Lots of questions, actually — sorry to take so much time. I see Knative is kind of related to serverless, but the CNCF today doesn't have a serverless TAG. So I'm wondering, is there any plan, especially on the future roadmap — do you have something around this area? I had a discussion yesterday with TOC members, and I'm trying to see whether there's a chance of building a TAG for serverless specifically; right now I think that TAG is lacking. So, is there any plan in your roadmap or future vision to better align with the CNCF here? — I would say, if you form a TAG around this, then please include us; that sounds great, actually, to have a discussion. Serving kind of falls into, at least, the runtime TAG, but there are so many runtimes, and it would be nice to have a use-case-focused TAG in this type of environment. I don't know if anyone else has thoughts. Yeah, let us know about the TAG. — We are out of time — we got the sign — but if you have questions, please come to the front, and thank you very much.