Everybody welcome to our OpenShift Commons AMA, ask them anything, not me, for Knative. And today we have Paul Morie, Matt Moore, Scott Nichols, and Roland Huss, all participants in this wonderful project, who are going to give us an introduction, tell us a little bit about the roadmap, and hopefully leave a little bit of time at the end for all of your questions. We are live streaming this, so if you are in any one of the multiple live streams like Facebook, YouTube, or Twitch, or I think Periscope even now, throw your questions in there and we will aggregate them here and force these people to answer them. So without any further ado, Paul, introduce yourself and your cohorts and let's find out what's going on in Knative land. All right, well, I'm Paul Morie, everybody, and I work on Knative at Red Hat. Why don't we just have everybody introduce themselves so you can be sure that you're satisfied with your introduction, and if you're not satisfied, you have only yourself to blame. Matt, why don't you go ahead next? Sure. Hi, I'm Matt Moore, I work at VMware. I'm one of the folks who started Knative, now a little over three years ago, at Google, and I've been doing container tooling stuff for what feels like forever. I'm going to pass the hot potato to Roland. Yeah, hello. My name is Roland Huss. I've been working on Knative I think now for two years, and I'm mostly involved in the Knative client project; I'm the working group lead and, yeah, I'm doing all the client things here. That's me. Now, Scott, up to you. Hey, howdy. I'm Scott Nichols, I work on Knative at Red Hat, formerly Google. I work on the eventing side, mostly focused around source creation, so that's how events get into the cluster, and making interesting things that you can do with events. And I also contribute to CloudEvents, the CNCF project. Scott, if I can just correct you, it's news to me if you work at Red Hat; I think you meant to say VMware. I thought I said, well, okay, well, yes, sorry, Red Hat. What? No, VMware. Roland's freaking me out. You know what it is? It's this Red Hat logo down at the bottom here. Anyway, moving on. We all work together. Paul, your camera's not on, Paul, so we're not seeing your smiling face. You want to see my face? We'd love to see your face. Hey there, it's me. All right. Yeah, I guess I'll just hold this if you want to see my face. So what is it? One of the things that is sort of in the air in our industry is the confusion between two related but different things. And those two things are serverless and FaaS, or functions as a service. And I've seen Knative referred to as a FaaS a few times. It's really not. And here's how I would put the difference between the two. And any of my co-presenters, feel free to riff on me or correct me if I'm wrong or whatever. But I would articulate serverless as being essentially request driven, with automatic scale to zero, as the two identifying and key properties. When we think about what FaaS is, it's usually a lot more than serverless. To me personally, my own opinion, FaaS implies serverless, but serverless doesn't necessarily imply FaaS. So when we talk about Knative, we're going to be focused on serverless elements. And we're not going to be focused as much on those experiential elements that are, to me personally, the difference between serverless and FaaS. FaaS is something more than serverless.
It's got a lot of connotations of developer experience, builds, and SDLC bound up into it that are more than just serverless. So let's talk a little bit more, as I'm double fisting these devices since I've got my phone and my laptop; I'll see if I can operate them both correctly. What is Knative really, now that we've maybe hit bedrock: if we were thinking it was a FaaS, we're sort of recognizing that it's not exactly FaaS, and it's more serverless. Knative is really a Kubernetes extension that is focused on developer productivity. So when we talk about extending Kubernetes, I'm sure this will be familiar to a lot of people in this audience: that Kubernetes extension looks like a Kubernetes-like API, so a declarative API surface, usually implemented with CRDs. And that is what we use in the Knative project: CRDs and the accompanying features of Kube, like webhooks for conversion, for validation, for mutation, that provide a Kubernetes-like API and API experience without adding code into Kubernetes itself. So we're extending Kubernetes to solve these boring but hard problems, like scaling to and from zero, scaling on demand based on requests, keeping a history of revisions for your application, routing events, and stuff like that. Calling these things boring is really not being overly generous; we're maybe being facetious when we say boring but hard, but they are things that you would repeat over and over again. I've certainly implemented some of these things myself in previous lives before Kubernetes. So they're things that you might find yourself implementing over and over again; they're tough, they're hard to get right, and they take a lot of engineering know-how to get exactly right. And in my own personal impression, that's sort of the value proposition of Knative: we're doing a lot of these things that you would otherwise have to worry about yourself, because Kubernetes doesn't already do them, and giving you the tools to really just focus on your business logic and what you want your application or the system that you're building to do. So there's two key pieces here, and actually this is an outdated slide, and Roland, I'm just going to apologize to you and the folks that work on the client because I've left off the client, but the two key functional pieces that we're going to talk about today, and we're also going to talk about the client, are serving, which is about the scale to and from zero, scale up to N on demand, and a history of immutable application revisions that we can split traffic between for any number of different reasons; and eventing, which is about connecting, in a loosely coupled and late-binding way, event producers and consumers. I don't want to say too much more about either of these two, so that I don't steal the very impressive bolts of thunder that my co-presenters have for us today. But those will be two of the main focuses, and then we're also going to hear about the client. Those are the high notes. So, co-presenters, do you want to just add anything here before we move on, or is that sufficient for you? I think you nailed it, Paul. Yeah, nailed it. 2021's off to a good start, nailed one thing already. Let's move on to a little history lesson. So, Matt, why don't you do the first couple bullets here, because Matt is one of the founders of the Knative project. Sure.
So, you know, there's always a lot going on, but this is sort of a highlight reel of some of the major events throughout the course of the project. Back in the fall of 2017, we started some of the really early prototyping of, you know, what Paul described: trying to look at what a higher-level abstraction on top of Kubernetes for developer productivity, serverless, FaaS-style things would look like. And publicly, you know, tons of folks joined in: Red Hat joined in, Pivotal joined in, lots and lots of folks were discussing it, and it launched publicly in July of 2018. And a lot's happened since then. At the time we also had another area called Knative Build, which was intended to help solve the sort of source-oriented nature that people traditionally think of as being a key part of FaaS workloads. And in March of 2019, that spun out as its own project, which you may know now as Tekton. You know, other major milestones: the serving API had its v1 revision in September of 2019. And one of the big things that's been sort of a recurring theme laced through some of the more technically oriented things in here has been the topic of governance. And Paul's been, you know, one of the big advocates of this on steering. In May of last year, we had our first TOC elections. Markus and Grant joined the TOC, and we now have a sort of vendor-neutral representation: no one company has more than two of the five seats on the technical oversight committee, which was a really interesting milestone in the open aspects of the project. Shortly thereafter, in summer of 2020, the eventing API went v1. And in November of this past year, we had our first steering elections, where Paul won one of the elected seats, and one of the other folks who started the project a few years ago won the other elected seat. So we now have both a steering committee and a technical oversight committee where no single vendor, you know, has all the say, right? So, it's very exciting times for the project. Yeah, one thing that I'll just add, when Matt talks about the technical oversight committee and steering committee, is that in addition to the vendor neutrality element that Matt described, where we can't have more than two people employed by the same vendor, the folks on those committees are serving as individuals, not as employees of their vendors. Those folks that are on the committees are not acting on behalf of their vendor; they're serving as individuals, which I think is an important point to mention. Thanks for the history lesson, Matt. Anybody else want to add a critical detail that I'm already taking for granted? Anybody else, any of my co-presenters want to add anything to this slide? Excellent. Okay. Alright, so we got the first meme of this presentation. CloudEvents. I had a couple different variations of this meme. One of them is the one that you see; the other one said CloudEvents, and I wanted to put Scottie's face on it, but I didn't have enough time. I think Dan POP generously offered to do it and then I forgot about it. So this is what you get. We'll maybe make one that's even funnier, but I wanted to say a few words about CloudEvents.
I'm not sure how high the name recognition is for the CloudEvents project, but the CloudEvent format is a vendor-independent standard for event metadata, and it is actually a CNCF project. Scott, in particular, I think from the group of co-presenters that I have today, is very active in the CloudEvents space. So, Scott, you might want to add stuff to what I say after I'm done. But the reason that I mention CloudEvents is that when we talk about things being event activated and event driven in Knative, and, you know, in particular eventing, there's probably a fairly obvious connection there with eventing. The message format that is the lingua franca of the Knative project is the CloudEvent format. It is supposed to facilitate interoperability between different producers and consumers and be a vendor-neutral format that can be adopted. And it's what we're using as an event format in Knative. Scott, do you want to add anything to that? Anything important that we should know before we continue our journey through this introduction to Knative? I think the only missing important bit about CloudEvents is that the specification describes how to turn the core nugget of your event between protocols and back to this protocol-less version of that event (there's a sketch of what one of these events looks like a little further down). And the reason this is important is that I can go and write my functions, have them based on CloudEvents, and then all of a sudden I'm not locked into the protocol I chose when the project started. So Knative serving is really about, you know, adding this missing layer to Kubernetes to make, you know, functions or containers scale easily. Eventing is really about choose-your-protocol, how you do transport, later, because we help turn these events from CloudEvents into other protocols. So you could be running Kafka in production, but maybe NATS on your desktop because it's lighter weight to run, or something like that, or even pure HTTP. We get away with this because we depend on CloudEvents to be this kind of neutral converter between these protocol-specific eventing formats and this canonical form. Right. Well, let's go a little bit more in depth on serving so we can learn a little bit more about how that scaling works. Matt, I think this is your slide. What does Knative serving get me? Sure. So I think you framed this well at the beginning, right, talking about a lot of the stuff you have to do: there's a lot involved in launching a production service on top of Kubernetes, and a lot of, you know, I'd use the word boilerplate in terms of the kinds of things you need to set up to operate a service, right? You have Deployments, you have Services, you have Ingress, you have HPAs, you have all of these things, right, that you need to do when you're adding new services. And as folks shift from, you know, big monolithic applications to, you know, the new hotness, microservices, or even functions, right, you end up needing to do that a lot more, right? And so the way I like to think about serving is sort of reducing the incremental complexity of launching new services, and, you know, having this goal of enabling developers to effectively focus on the business value they want to provide in those services, right? So really with Knative serving, what you bring is just a container image that has your HTTP-based application in it.
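To ground Scott's CloudEvents point from a moment ago: a CloudEvent is just a small set of required metadata attributes plus your payload. This is a minimal sketch, shown as YAML for readability; the event type, source, and data values are made-up examples, and on the wire the same event would typically travel as JSON in structured mode or as ce- prefixed HTTP headers in binary mode.

```yaml
# A minimal CloudEvent (spec 1.0); all values here are illustrative.
specversion: "1.0"                  # required: CloudEvents spec version
type: com.example.order.created     # required: event type, reverse-DNS style
source: /carts/order-service       # required: identifies the producer
id: 9a1b2c3d-0001                   # required: unique per source
time: "2021-01-14T12:00:00Z"        # optional timestamp
datacontenttype: application/json   # optional: how to interpret the data
data:
  orderId: 123
  total: 42.50
```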
And what you get, coming back to serving, is a DNS endpoint for your application, possibly exposed externally. If you've configured automatic TLS, these will be TLS-terminated endpoints without you, the developer, having to do anything. As Paul mentioned earlier, as you create changes in your application, each version of your application is stamped out as what we call a revision over time. The next slide illustrates a little bit what makes this a powerful concept: it enables you to reason about versions of your application over time. And this is most useful when you want to, say, canary some traffic to a new version of your application, or all of the traffic to a new version, and roll forwards and backwards in time depending on your production needs. Can you go back for just one second? I just want to make sure I, yeah, okay. And I think the two other really interesting things are request-based autoscaling. So as your application gets more or less traffic, and this may be because you are rolling out a new version or not, we will basically right-size your application and, you know, have 10 replicas, 20 replicas, or even zero replicas depending on the volume of traffic your application is serving. The last thing that we do that I think is really interesting to call out, and this is to enable those FaaS-style use cases, or what folks think of when they think of FaaS, right, with your Lambdas or your Google Cloud Functions or whatnot: a lot of these FaaS models have this ability to have the runtime layer take care of concurrency control. And so if I want to say only let one request through to each instance of my application at a time, you can do that through this idea of container concurrency. That's one of those things that we have built so your application doesn't need to deal with, you know, concurrency control, which can be tough to get right, especially when you start to blend it with things like load balancing and autoscaling and getting really good performance out of some of those things. So yeah, the next slide really illustrates the resource model here. I mentioned the one resource that you need to deal with for launching new services. This is the Service resource. In its simplest form, you pretty much just give it a container. This exposes an HTTP endpoint. Under the hood, it creates what we call a Configuration resource, which tracks the latest state of configuration for, you know, what is running, and those Revisions are that history of changes to that Configuration resource. I like to make the analogy that Revisions are sort of like Git commits and Configurations are sort of like the floating head of your Git branch. And so, you know, as you make changes to your branch, new commits happen, but the old commits are always there, and so you can always reference those older commits if you need to. The Route is what controls where traffic is sent over that history of Revisions of the Configuration. And so you can either have us automatically track the latest or, if you want to take complete control, you can, you know, control percentage-based splits to distribute traffic across some number of Revisions. And, you know, do 1%, 2%. You can even do 0% splits and do what we call tagging if you wanted to sort of pin but qualify a new revision prior to sending it any of your main traffic load.
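Here's a rough sketch of what that looks like in YAML, combining the Service resource Matt describes with a 0% tagged split. The service name, image, and revision names are hypothetical (revision names are normally generated for you, though you can also set your own under the template's metadata), so treat this as an illustration of the shape rather than a copy-paste recipe.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:                   # every change to this template stamps out a new Revision
    spec:
      containers:
        - image: registry.example.com/hello:v2   # hypothetical image
  traffic:
    - revisionName: hello-00001   # the current stable Revision keeps all main traffic
      percent: 100
    - revisionName: hello-00002   # the new Revision gets 0% of main traffic,
      percent: 0                  # but the tag gives it its own URL for qualification
      tag: candidate
```

Once the candidate checks out, you would shift the percentages (90/10, 50/50, and so on) or just point 100% of the traffic at the new revision.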
So you can do some very powerful and sophisticated, you know, canarying and qualification prior to rolling things out. But the configuration to do these things ends up typically being quite small. And so we take a lot of the complexity out of some of these things that, you know, can get very, very complicated. So next slide. I think it's our next meme if I'm not mistaken. Okay, yes. So this is my favorite bit of Knative lore, right? When we first launched, this was actually true: we did actually need Istio. But one of the pieces of feedback we got pretty quickly was that, you know, there are other networking layers out there. Some folks don't want the full mesh-style networking layer; some, you know, just want to deal with ingress-style networking. And so one of the things that we did was build an abstraction describing what we need from the networking layer, where, you know, Kubernetes Ingress wasn't quite cutting it, and I think everyone has sort of accepted that too; Ingress is, you know, now v1, but it was v1beta1 at the time, and for what seems like forever. So we built an abstraction that lets us describe our needs from the networking layer. And Istio became one of those options. We have now half a dozen or so integrations, like Kourier and Contour and Gloo and Kong and Ambassador. And these are all in our install instructions, and frankly, you know, I'm partial to Contour, but I haven't run Istio in, I think, over a year for most of my Knative development. It really is just an implementation detail, and you can choose any of those options, and a lot of them are actually, you know, really great and, I think, comparable to Istio, which is our oldest integration. So, yes, we do not need Istio; Istio is just one way of running Knative. Do you want to add anything to that, Paul? You know, the thing that I wanted to add before we move on, and I don't think we touched on this but it's important, is that the Service and Configuration APIs use a subset of the pod spec. And so when we see the demo later in our talk, let's just make sure that we highlight that so that people can see it in action. The reason I mention it is because I suspect that a lot of the deployments that people have today would translate directly into Services if folks wanted to try that out and see how their existing deployments run in event-driven mode, where they're scaled down and back up depending on load. But otherwise, I think you covered it very thoroughly. And I will just go ahead and advance the slide now to the roadmap. Okay, so this is sort of a showcase of some of the things we're working on. Excuse me, sorry, I had a tickle in my throat. These are some of the things that we have cooking in various stages of development. One of them is domain mapping: the idea of being able to assign sort of a vanity URL to your Knative services, so instead of that foo.default.example.com you can have, like, you know, my-awesome-blog.mattmoore.io, and, you know, assign proper DNS names, now with TLS termination, in front of your Knative services. This is in alpha. I believe it's available in our 0.19 and 0.20 releases; 0.20 is just off the presses, and 0.20 added auto TLS to it. This is an alpha API. So if folks want to give feedback on this, we would really appreciate it.
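For reference, the alpha DomainMapping resource Matt is describing looked roughly like this at the time; the domain and service names are made up, and since this was an alpha API, the API version and field names may have shifted in later releases.

```yaml
apiVersion: serving.knative.dev/v1alpha1   # alpha at the time of this talk
kind: DomainMapping
metadata:
  name: my-awesome-blog.example.com        # the vanity domain you own
  namespace: default
spec:
  ref:                                     # the Knative Service to put behind that domain
    apiVersion: serving.knative.dev/v1
    kind: Service
    name: my-blog
```

You point the DNS record for that name at your cluster's ingress, and Knative handles the routing (and, with auto TLS configured, the certificate) for it.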
It is magic, by the way. It's one of the coolest features that Knative has shipped in a long time. I am so excited about it. Basically what it results in is TLS-terminated pet project domains across your cluster. It's amazing. That's finally something to do with all those domains you've been buying. One of the other really cool things that I like, and this has been in the works for a few releases now, is this idea of gradual rollout. So we support very fine-grained traffic control, where you can, you know, take revisions directly and split across them with pretty fine grain. But one of the most common modes we see folks using, and this is, you know, when you're getting started, is that you sort of just want to roll out to the latest all of the time. But depending on how much traffic you're getting, if we just shift things over in one big swoop, you know, our ability to scale from zero to, you know, a huge number might be limited by factors at the Kubernetes layer. And so what the gradual rollout project is doing is basically making us smarter about shifting traffic over some amount of time that you have specified. And that way, you know, as you start to scale up to bigger deployments, you can, you know, not drop traffic as you're rolling out new versions, without needing to do your own whole complex orchestration. One of the last things I wanted to touch on: I mentioned that networking layer that we created as our own abstraction because Ingress v1 wasn't quite good enough. We've been engaging with the Ingress v2 efforts in upstream Kubernetes to make sure that when that lands, it does meet our needs, and we can retire that abstraction and leverage, you know, just raw Kubernetes to do a lot of what we want. And then two aspects that we will be sort of pushing on forever are scaling in every dimension you can imagine, as well as request latency. So I will hand things, I think, back over to Paul. I think that's my... Yeah, we've got a couple questions about serving in the chat here. The first one is from Dan. Can this integrate with Helm for deploying Helm charts, rolling back charts, or does it replace it? So I just want to make sure I understand. I think the question is, can I use Helm to roll out Knative services? Is that the gist of it? I kind of had a different reading of that, but I think the question that you read out of it is an important one to answer. Sorry, can I expand on that one first? Yes, go ahead. So yeah, I mean, that is an alternative reading, and that's fine, I'd like an answer to that as well. But it was more, I saw you talking about the way that you've got this idea of a canary, a rollout and a rollback. And I kind of... We've been expecting to use Helm to do that, and I'm just wondering, is this an alternative? Because we've committed quite a bit to modeling our applications to be deployed as Helm charts, and I just wanted to know, does this integrate with it, or does this replace it as a mechanism for rolling and upgrading between versions of services? So that's a good question. I don't think we do anything specifically to integrate Helm more deeply than what you can do with Helm and our YAML, and you should be able to use Helm to roll out canary services in much the way you can roll out other resources. But I think we haven't done anything to sort of integrate with Helm more deeply with respect to awareness of its revision model.
But I think in principle, there's nothing stopping you from leveraging Helm to manipulate Knative's concept of traffic control. One of the things we did introduce is a more sophisticated way of doing fine-grained control where, rather than just having us generate names for each new revision, you can bring your own name for the revisions, which allows you to predictably stamp out new names for revisions, which you can then use in traffic control. And so I think that allows you to do more like what you were describing, but there's no deep integration, if that's the question. And so would you expect to deploy different revisions to different namespaces, or would multiple revisions be deployed to the same namespace? Typically, is there a best practice kind of thing? So today, all revisions need to live in the same namespace, and most of the resource model within serving expects things to live within a single namespace. And to some extent, it's designed so that if you were leveraging namespaces as your tenancy model, you should be able to hand users credentials to manipulate Knative serving resources within that namespace, and they should be able to operate productively, if that makes sense. Okay, great. Thanks for your time. There's one other question that I'll call out for now. It's not the only question in there, but it's the one that's closest to serving. And the question is, is there an ETA on functions coming to Knative? And I would say at this point, there really is not an ETA that we can give. In our community, the subject has come up maybe a couple of times, and I think there's very great interest in having a community-based concept of functions on top of Knative. But we so far haven't been able to agree, I don't think, on an approach. So I can't really give an ETA now. It's definitely on our radar. And I appreciate the question being asked. If the person who asked the question is very interested in it, I'd love to get a note about that to the knative-dev or knative-users mailing list; they're knative-dev and knative-users, and their home is in Google Groups. I'd love to get some surfacing of that in there. I will definitely surface that it came up, but so far we can't really give an ETA on it. And I think in the interest of time, it's probably best to advance the slide now, and I think that will be eventing. So Scott, why don't you take this section as the eventing rep on our little call. Thanks, Paul. Okay. So what does Knative eventing get me? That's a good question. I can read the slides here. We're enabling async app development, event driven from anywhere, loosely coupling and late binding producers and consumers; producers can generate events before consumers exist. So basically, eventing has a hard problem, because there's, you know, 20, 30 years of eventing history in compute, right? Like, serverless is fairly new and there's no real cookbook of patterns for how you cook up a serverless containerized thingy. But eventing patterns and messaging patterns have been around forever. So we kind of had to step back and say, what does this look like in Kubernetes land? And the answer is something that is late bound.
So you have a reconcile loop that's constantly healing your consumers and producers and repointing them when, you know, my consumer moves to a new cluster URL or a new cluster, or it resolves to a new address, or gets deleted and recreated somewhere else. And being able to heal the cluster's eventing mesh is something that we really focused on in eventing. One thing the slides don't really say is that eventing is really broken up into a few big major chunks. We started out with messaging. We have a messaging API group, and it puts a thin abstraction on top of, like, pub/sub components. Turns out that's really hard to build with, because it's very imperative in how you would assemble your cluster. So we came up with a second model that sits on top of that, that can leverage it but doesn't have to, which we call eventing. And eventing is more like, actually, can we go to the next slide? Can we kind of show? So eventing brought in this thing called the broker. You can think of this as like the ecosystem or the mesh of all the events that are flowing through your eventing system. You pluck events out of that mesh using a trigger. A trigger points to a broker and has a filter. You could consider it like a query. Once that query matches, that event gets copied out of the broker and delivered to a subscriber. And we have a bunch of magic here to let this be discoverable and late bound and self-healing and things like that. Now, one thing that I'll just mention here is that the target doesn't have to be a Knative service, right, Scott? Yeah, yeah, okay. So as we were developing the eventing components, we kind of had this idea, I think Ville and I were talking to Matt, and we hit upon this idea that the subscriber will potentially be a Knative serving service, but we don't want those components to be coupled, right? So eventing knows nothing about serving. It's independent, but they do share some common interfaces that we call duck types. Basically, the trigger can point to this duck type that we call Addressable, which basically says: in your status, you have this place where you expose the URL you would like to be invoked on, right? So anything, any CRD, could actually implement this. And a lot of the eventing components implement this contract. Serving implements this contract. You could write a CRD that implements this contract and have it just kind of wire up and play. Or the trigger takes a URL, which could be in-cluster or external to the cluster. So, yeah. Knative eventing is completely decoupled from serving, but it works really well with serving. All right, ready to move on to the roadmap slide? Sure. So, eventing's roadmap: we're working on stabilization. There are a lot of features that work, but they could use more tests, and those tests could be a little more stable. So we're really focusing on that right now. We've got several experimental features; there's a link, but there are a bunch of experiments. Because eventing and messaging have been around so long, there are a lot of ideas around where we should go and what we should do. Eventing doesn't really want to have to take an opinion there. We're trying to enable all of the patterns that you could invent and implement using this thing.
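A sketch of the broker and trigger shape Scott just walked through, using the eventing v1 API that had recently landed; the broker name "default" is the conventional one, but the event type and the subscriber service are made-up examples.

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default            # the "mesh" all events flow through
  namespace: default
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-created
  namespace: default
spec:
  broker: default
  filter:
    attributes:                            # exact-match filter on CloudEvent attributes
      type: com.example.order.created
  subscriber:
    ref:                                   # anything Addressable works here;
      apiVersion: serving.knative.dev/v1   # a plain `uri:` also works
      kind: Service
      name: order-handler
```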
So where serving brings you really easy Kubernetes scale-to-zero containers, eventing enables this really easy shim on top of other protocols to help you decouple your choices, so that you can make different choices later without having to recreate your entire application, right? But that thin shim needs some more features, like maybe some smarter filtering in the triggers, or improving the reply contract. So, how do I know in the data plane that if I'm going to invoke some subscriber, that subscriber understands it can reply to the broker to re-ingress an event back in? Why would you want to do that? Well, we had this interesting thought. What if the broker allowed you to reply to events, and then those new events that you replied with get ingressed back into the broker, so you never have to know which broker invoked you, right? So smaller footprint, more reuse of your deployed components, things like this. We are in eventing... Sorry, I thought you were done. Go ahead. So then in the next six months, we're still catching up on the autoscaling of the eventing components. Still working on that. We're partnering with projects like KEDA to look at, well, poll-based scale models; maybe KEDA is the way. And so we're adding hooks and plug points and some standards on how you get your eventing components to scale with external things like KEDA. We're also looking at: we emit Kubernetes events so you can understand what's going on, and we make these CloudEvents that are in the data plane that your application is making, but maybe there's this third category kind of in the middle: what would my application say if it could make events, and then how would I route those things and react to them? So, event-driven models for your application and all of the components that enable your application. And... Knative eventing is kind of an implementation of the CloudEvents specification with a bunch of other opinions. One of the things that we're working on in CloudEvents is the discovery and subscription APIs. And I think you're going to see that trickle down into Knative in the next six months or so. Well, thank you, Scott. Roland, you're up. Why don't you tell us about that fly CLI? Yeah, sure. Yeah, the CLI, actually. This is the component which opens the door for you to Knative Wonderland. So if you go to the next slide, please. It's really about how you can access Knative from your CLI. Of course, you can do everything with your resource files as well, but a dedicated CLI for Knative has some advantages. You can distinguish between two modes of operation. One is the imperative mode, as you know it from kubectl as well. You can actually manage nearly all, I think, of the Knative core entities which are user-facing, directly with CRUD operations, so you can create them, you can make updates, and, of course, list them in varying detail in a human-consumable format. And of course, you can delete them as well. They are grouped into different areas, like we have for Knative serving. We know how to manage services; we can create and manage Knative services and also revisions. And also for eventing, we have different... yeah, for every entity, you have kind of a noun, so it's all the same schema. You have kn, then you have the noun, and then the verb, what you want to do.
For the sources, actually, this is also a very important eventing component, which is kind of an adapter whose main purpose is more or less to convert custom events from outside into the CloudEvent format, which then gets fed into the eventing infrastructure. So for that, we also have direct support for all the sources which already come built in with Knative eventing. But we can also manage other sources, which we can leverage by using plug-ins. But beside this imperative management, which is very nice if you want to build up your services incrementally (you're in the CLI, you want to add something, you want to try out something), you can also finally export all the stuff that you have done from the cluster into a file, and then actually just take this file to another cluster and import it there, or create it again with kn. But there's also this so-called declarative handling of Knative services, which allows you to describe the target state that you want to have. And this has the same semantics as kubectl apply, which means you get a three-way merge with the stuff which happens in the meantime between two runs of apply, for example. So it works the same way, and actually it even reuses the way kubectl does this merging. Also borrowed from the kubectl architecture are the plug-ins, which work similarly to kubectl. There's one thing which is, I think, an addition to the way kubectl handles plug-ins. Plug-ins in kubectl are just external programs which are executed like a command, so it's a separate process. But with kn you can also create in-lined plug-ins, which means if your plug-in is written in Golang, then you can also make a separate, own build of kn and inline it. This is quite nice if you want to have a single binary which includes a certain set of plug-ins, and we are also currently working on a kn builder project which allows you to declare the plug-ins that you want to include; you just run that and you get one blob of binary that you can execute with all the plug-ins included. This of course only works for Golang, but the regular plug-in architecture works for any language, of course. Then we also recently added GitOps support, as we call it, which means we have dedicated Tekton tasks that you can reuse in Tekton pipelines for deploying your Knative services. And brand new, fresh off the press, is offline generation of resource files, so you can actually operate kn against your local file system. You do not need a direct connection to your cluster; you just add an option, --target, you provide a directory or a file name, and then it just creates the resource files directly from the arguments that you provide to kn. So this is a very easy way, even if you do not have a cluster at hand, or if you do not remember the schema of the Knative resources: you can just use kn, use the help messages, use some arguments, and this will build up the YAML files for you. This is very, very convenient, and then of course you can take that file, commit it into your source control management system, and go ahead. This is very, very nice. So far we just have the support for kn service create, but we are also continuing this theme by adding it to update and to list and so on. So this is really something which I'm pretty excited about.
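As a rough illustration of that offline mode: a command along the lines of `kn service create hello --image registry.example.com/hello:v1 --target ./out/` writes a ready-to-commit Service manifest under the target directory instead of talking to a cluster. The exact file layout and defaults depend on the kn version, and the image and paths here are hypothetical, but the generated YAML looks roughly like this:

```yaml
# Written by kn under ./out/ instead of being sent to the API server;
# commit it to Git and apply it later (with kubectl, for example).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/hello:v1
```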
Yeah, this in a nutshell is what kn can do for you. Really, everything it does you can do with resource files as well, but it's much, much more convenient, in my opinion, to use kn on the command line. You can install it easily with brew on your macOS or Linux (there's brew for Linux support as well), you can download it from the GitHub release pages, and it's released on the same cadence as the rest of Knative, so the six-week cadence of the Knative components. And because I just saw the question of whether the Knative CLI is stable, I can say that it's pretty stable. For kn it's a little bit different because it supports eventing and serving, and so, for example, there was some point in time where serving was stable but eventing was not, and in that period we had support for both of them, but we of course marked the eventing support as kind of experimental. We also have other features which are marked experimental, but otherwise kn is totally stable. It's also included in products like OpenShift; it's already shipped with that, and you can completely rely on it. Yeah, that's kn. In the chat, yeah, go ahead, let's hear about the roadmap, and there's a question that pertains to this that I'll give you once you go through this stuff. Okay, cool. Yeah, what's on the roadmap for the future? We of course want to continue on all the topics that we have started. We want to support more sources, and we also want to support arbitrary sources, sources that were not known when kn was compiled or built, and we want to leverage metadata that is offered to us, so like CRDs, but also the Knative Discovery API, which is easier for us to consume, because CRDs are a little bit hidden and typically only meant for administrators, so a regular user is not necessarily able to read CRDs. And based on this meta information, we want to support different command-line arguments, so we really want to offer dynamic command-line arguments that are based on the type that you are managing. This is quite challenging, but it will give you a nice user experience, I'm pretty sure. Then another thing which I'm very excited about is called Kamelets. If you don't know Kamelets, no problem, because it is really brand new technology. It's based on Camel, which is an enterprise application integration platform, and the good thing about it is that it comes with around 300-plus components that you can reuse as Knative sources, which means they can connect to external systems. Even if you know nothing about using Camel, you can leverage all this existing stuff directly and just use such a Knative source, for example, to connect to systems like Telegram, Salesforce, ServiceNow, whatever you want, and Kamelets will convert all these events to CloudEvents. And yeah, support for that will be implemented as kind of plug-ins as well. Then of course we are always looking for new plug-ins in the knative-sandbox. The knative-sandbox is kind of a staging area for extensions which are not really part of the Knative core directly, so it's also where we put different kn plug-ins, and two of them will be a log plug-in, which allows you to directly print out service logs like you know from stern, for example, and also another plug-in for directly creating events locally on the command line and injecting them into the eventing infrastructure, which is very convenient for testing and debugging. And finally, something we are always trying to improve is user experience, but we also
rely on your feedback for that. So actually, we know some weak points in the user experience, for example the way we specify traffic splits, which you can definitely do with kn, but we feel this can be better, and yeah, so we are going to improve on this story. So this is the roadmap which we will work on in the next six months, I would say. Alright, thank you very much. You know, listening to this stuff that we're all talking about in this presentation, I'm like, this sounds pretty cool, but will it blend? Scott, I believe you've got a demo you're going to show us to establish whether or not it will blend. Here's where I will shut down the screen sharing, and Scott, go ahead and you can share your screen, and we'll come back to this deck in a moment. Okay, here we go. You need to turn it off. There we go. Alright, I only got a few minutes, so, you know, I'll just go through this quickly. Cool, okay, here we go. So I have been talking to my friends over at Falco. If you don't know what Falco is, it's a thing that watches for events that are interesting and turns them into webhooks. There's this other project called Falcosidekick, which turns those events into some other thing, and I was like, well, you guys don't really have CloudEvents there, and so I helped them add it, and here's a demo of using Falco and Falcosidekick to do some stuff. So let's see. First off, let's take a look at the graph here. In my cluster right now I have a SinkBinding that links the Falcosidekick into the ingress of the broker. So remember, the broker only consumes CloudEvents, so CloudEvents are going to bounce around in there, and then I have this trigger to send anything that's from falco.org with the type of Falco rule output to this sockeye service. So what the heck is sockeye? It's another fish program I wrote. It just shows you the stream of CloudEvents that are invoked, using a websocket between the cluster and this web browser. So what can we do? Let me just grab this little exec here. I'm going to exec into my MySQL pod that I have running, you can see it right here. So I'm going to exec into it, I'm going to do an interactive terminal. Cool, I've got one, and I also got this event here that says terminal shell in container. That seems like an interesting event. I don't really want people to have interactive shells on my cluster here. So I made a very, very, very simple application. All it does is listen for a CloudEvent and do a kubectl delete on the pod that comes in. Very simple. And to implement that, I added some RBAC to allow me to get and delete pods, and I wrote a Knative trigger that says: for things from Falco with that same rule, with the rule text terminal shell in container, send it to this drop service, which is a Knative serving service, and the YAML for that is here. A couple things to note: I'm asking for it to be only cluster internal, because I don't really want to expose the pod-killing device to the internet; that would be real bad. I'm going to show you the code in action, because this is hard mode. So here we go, we're going to deploy that. One thing I'll just add as Scott is doing that: if you look under the template part of the service's spec, that is in the top pane there on Scott's display, you can see that this looks pretty close to a pod, right, Scott?
No, it doesn't look like a pod, it looks like a deployment; pods don't have the template field. If I changed this to a deployment and then added a bunch of other stuff, like a Kubernetes service and some other things, then I would have the same setup. It just wouldn't scale to zero, and it wouldn't have auto TLS. So I can see that I've got my drop thing, it's cluster local, it's ready to go, and so now let's go and exec back into that MySQL pod and we... oh, so what happened here? Falco detected that somebody created a terminal shell in a container, we got another event here, terminal shell in container, and if we refresh the graph we can see what the new graph looks like. We still have the Falcosidekick ingressing to the broker, we have another trigger for drop, for this Knative service, so now we can see the event stream that's coming through the broker, but we can also cherry-pick this terminal-shell-in-container event and send it to the drop service, which goes and invokes death onto my interactive shell, where maybe I'm trying to do some malicious things. I whipped this up super quick; it's not a ton of code I've got to show you, which is cool. Any questions? We'll send it back. All right, thank you very much. I'm going to share my screen again now, and we'll just wrap the remaining slides of this deck up. There we go. Okay, will it blend? It blended, so that was good. Let's talk about 2021 goals real quick, as we're running over, and I thank everybody that's still watching. So number one, my own personal opinion here, is getting the Knative APIs to a v1 level, v1 meaning, you know, a good expectation of backward compatibility. I think that, for me, in my experience, getting a project to a 1.0 level is a very important psychological threshold for adoption, and you'll see adoption's also on this list. So for me personally, I'm hoping that in the first part of 2021 we can declare that Knative is 1.0. We also, I think, are really interested, inside the community of folks that develops Knative, in having more integrations, and the one that Scott has just showed us is a really great example of the type of integration that I'd like to see. You know, the more things there are that spit out CloudEvents and can consume CloudEvents, the more utility Knative is going to have for everybody. So if we think about how we make something that is most useful to everybody, the more integrations the better. And Matt, I think you put the improved UX on here, so I'll let you speak to that. Yeah, sure. So one of the things that we've started to do is a bunch of user interviews where we've been talking to folks looking to get started with Knative, and there are rumblings that we might start a user experience working group to look at getting started, which I think is one of the really important journeys that a lot of users take. And we want to look at a bunch of these and make sure it's as streamlined as possible, so that we can get folks from the point where they have a blank cluster and they're like, I want to try out Knative, and get them to that sort of aha moment, like, this is what Knative does, okay, as quickly and easily as possible. So that's what I meant by improved UX; sorry, I just threw it on there before. Yeah, no problem. Of course we want more adoption, and I see there's a question in the chat from somebody on YouTube: any update on how serverless is being adopted in the community these days? And I can speak from the numbers that we track for OpenShift Serverless, which is the Red Hat product derived from
Knative, and we saw pretty good adoption growth in 2020. But my own personal opinion, and this is why more adoption is on our 2021 goals here, is that in general serverless is still a fairly advanced topic. And, you know, if you think about the growth of the Kubernetes community, it kind of looks like a hockey stick, right? What that tells me is that if we think about the appetite for advanced topics, there are probably still a lot of folks that are beginning their Kubernetes journeys right now. I expect the demand and the adoption opportunities will grow, and coming from the frame of mind of somebody who's serving on steering and wants to help grow the project, I think there's a lot more adoption out there for us to get. So that's definitely something that I think we will work a lot on in the community. And we also really want to grow the pool of contributors, so I'll just take this opportunity to say, you know, if you're watching and you think that this project sounds interesting to you, and it might be something that you would be interested in spending your time on, however much time you have to give, I would just say that I think we have a really great, friendly community in Knative, and we're also really interested in growing the pool of contributors. So if you have any thoughts about wanting to contribute but you're not sure what you could do, maybe you're not as focused on code, I would just say that I think there is something for everybody to contribute in open source. And I'm really interested, if you have the urge, if you have the interest, if you have the desire to contribute but you're not sure how you could do it, I would love for you to ping me and talk to me about it. You can hit me on Twitter at Cheddar Mint, and you can also get me at pmorie@redhat.com, and I'd love to talk to you about how you could contribute. We definitely could use your help, and it's a lot more than code. You know, I think that probably in this audience there's maybe an unconscious bias toward thinking of open source contribution as being all code, and that's just simply not the case. So if you can read documentation and tell us what did or didn't work for you from that documentation, that's a contribution that would be very valuable for us. Writing docs, participating in things. While you're going on, pop over to the landing page for Knative so people know where to find the whole project. Sure, it's a very easy URL to remember: it's knative.dev. Knative.dev, and that's a good jumping-off point to find any number of things, some of which are on this next slide here. Our GitHub organization is called the knative organization; there's a link in this slide that you can click when we share it, and I'll share it later on Twitter and some of the other channels that you might be looking at for these things, so you can click that link, but again, you'll find that from knative.dev. And I think someone else is trying to talk, go ahead. Yeah, to join the Slack, it's slack.knative.dev; that'll get you an invite code to come hang out with us. I put the wrong URL on there, I believe; that one is the Slack workspace, and the other URL, slack.knative.dev, will get you an invite link. You can also ping me on Kubernetes Slack, mattmoor, and I'm happy to share invite codes too if folks need them. I'll annotate this and correct that slide, and I'll upload the video to the OpenShift Commons shortly and tweet that out with the slides. So with that, I think we need to end and wrap up and respect
everybody's time, and really thank you all for coming and for all the work that you do in the Knative community. It's wonderful to see your faces. I think each one of you probably could do an AMA on your individual topics, so we probably will have you all back in the coming months. So thanks again for participating, and we will be back again with another AMA next week on a topic still to be decided. So thanks Paul, Matt, Scott, and Roland. Be safe, everybody, and take care. Thanks a lot for having us. Have a great day, everybody. Bye bye.