Okay, so let's get started. As a reminder, use the Sched app to rate the talks and select which of the speakers will win the Oculus or the Nintendo Switch sponsored by IBM. Rating also gives you an entry where you can write your name, and I'm going to raffle off one of the two; we haven't decided yet which one goes to the speaker and which to the participants, so stay tuned for the announcement. Our first talk is on the kn CLI. I'm excited to see this one; I'm a fan of the kn CLI. "kn, the one-stop shop for Knative", by Navid and David. Thank you, guys.

Hello, everyone. Welcome to the presentation: kn, the one-stop shop for Knative. I'm Navid, a software engineer at VMware, and I've been working on the kn CLI. I have here with me David.

Yeah, hello. I am David, one of the working group leads on the Knative client, currently working at Red Hat. A bit of background: Navid and I used to work together. I joined the serverless team at Red Hat and unfortunately he left for VMware, but we were colleagues and we are still good friends. So, back to the presentation. My first slide tries to answer a common question: is kn yet another CLI tool, and what is the value proposition of using kn instead of kubectl and plain YAML?
I think that theme came up in other presentations too, from the functions folks and the developer-oriented talks: not everybody is a fan of YAML and of all the declarative stuff. kn tries to be a bit different, with an imperative style of commands and the proposition that we really know what Knative is about: we know the resources, the CLI is typed for them, and we want to help beginners as well as veterans who are doing a lot with Knative resources, deploying services, managing traffic, and then moving on to eventing integrations with sources, triggers, and brokers. The goal is first-class support for everything in Knative.

All right, so why is kn a one-stop shop? Let's see how to get started with kn and Knative in a few minutes. This is how you install kn: we distribute it through package managers such as Homebrew, so you just do brew install kn and it gets kn installed. When you run kn version, you'll see which Knative Serving and Eventing versions it supports. Note that this shows the versions of Serving and Eventing that kn supports, not the ones you have installed.

And let's take a quick look at how you can get a Knative setup if you are new to Knative and have a working Docker setup on your local machine. I'm going to give a sneak peek at the quickstart plugin that Carlos and Paula have been working on. It can get you started with Knative in five minutes: you install the quickstart plugin and just run kn quickstart kind; you can also use Minikube. Take a look at the second-to-last line: it got you a Knative cluster on kind in five minutes, and you're ready to go. This slide is just a quick look at what the Serving and Eventing installation looks like.
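The install-and-quickstart flow just described might look like this; the exact Homebrew tap paths are illustrative and can differ between releases:

```shell
# Install the kn CLI (also distributed as plain binaries on GitHub releases)
brew install knative/client/kn

# Show the client version plus the Serving/Eventing API versions this build supports
kn version

# Install the quickstart plugin, then stand up a local Knative cluster
brew install knative-extensions/kn-plugins/quickstart
kn quickstart kind        # or: kn quickstart minikube
```

After the last command finishes, the kind cluster has Knative Serving and Eventing installed and kn is already pointed at it.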
So, we're past the installation; let's take a look at how kn interacts with Knative. It just works everywhere: kn works strictly against the Knative API contract. Whatever Kubernetes provider your cluster runs on, you install the Knative Serving and Eventing APIs on top and you're ready to go. Beyond that, as David mentioned, we have a rich ecosystem of plugins. There are special-purpose plugins that work directly with the Knative APIs, or with the Kubernetes cluster you have, so any requirements specific to a Kubernetes distribution can be added to kn via plugins. But kn itself works the same whether your installation is on OpenShift, on Tanzu, or on EKS.

All right, going straight into the Serving support. As a quick recap, Serving runs the application workloads for serverless applications, with scale-to-zero capability. I don't want to reiterate what has already been said about Serving, so I'll keep it kn-centric from now on. We offer the standard CRUD operations for all the Serving resources, with extensive service configuration support, and I have a slide prepared with all the flags we have for services. There are a couple dozen of them, and it can be pretty hard to keep up with all the annotation formats and every configuration option you might want to set on your services. So we try to offer a convenient way for developers to power through the help messages and quickly find the correct flag, without needing to know the full formats up front.
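The day-to-day CRUD and help discovery just mentioned looks roughly like this; the service name `hello` is just a placeholder:

```shell
# Power through the long flag list directly from the help output
kn service create --help

# Standard CRUD verbs on Serving resources
kn service list
kn service describe hello
kn revision list --service hello
kn service delete hello
```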
With the traffic management support, we also try to ease the burden of deploying new revisions and splitting traffic across multiple revisions, with percentages going to each one. That will hopefully be shown better in the demo later. And one of the things we are really keen on in kn is the wait-for-ready support. It's built in: every kn service create command will by default wait for the service to become ready, which is very handy when you are creating and deploying a new Knative Service and want instant feedback on whether it works, whether it will eventually become ready, or whether there is a problem such as an image pull failure.

Among the other features, there is an apply command, much like kubectl's in that regard, except it is able to properly merge our Knative custom resources; that is something extra that is pretty hard to do with plain kubectl. And finally, we are not ditching YAML support entirely: there is a way to generate the outputs that we would normally send to the backend, in a GitOps mode. You can store the YAMLs, edit them yourself, or just inspect the output, and use them in a versioned workflow where you keep everything in your Git repository and work from there.

This slide illustrates our offering for the Serving part of Knative. As you can see, we use the format of a noun naming the resource, followed by a verb to manipulate it. I'll show a few examples, starting with a very basic service create command: it's as easy as giving the name of the service and the image you would like to deploy from. The next one is an update of the service with a new target environment variable and a 50% traffic split.
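A sketch of those two commands; the image and the revision name `hello-00001` are illustrative placeholders:

```shell
# Create a service; kn waits synchronously until it reports Ready
kn service create hello --image gcr.io/knative-samples/helloworld-go

# Update it: a new TARGET env var creates a new revision, and the
# traffic flags split traffic 50/50 between it and the previous revision
kn service update hello \
  --env TARGET=v2 \
  --traffic @latest=50 \
  --traffic hello-00001=50
```

Here `@latest` is kn's tag for whichever revision is created by the update itself, so the split can be declared in the same command that produces the new revision.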
So, the previous revision keeps 50%, and the newly created one gets the other 50%. And as I mentioned, there is an extensive list of flags that almost didn't fit on the slides. Finally, something I didn't mention on the first Serving slide: we also have support for creating a multi-container service. You essentially build it through normal UNIX pipes: kn container add generates the chunks of YAML and feeds the containers through the pipe. That can be pretty handy if you are working with many containers that you want to specify for one service.

Thanks, David. Now we're going to see how kn supports all the native Eventing resources: sources, brokers, subscriptions, and channels, with complete CRUD operations for all of them. It also supports all the sources that Knative provides: the API server source, the container source, and the ping source. And through the extensible plugin mechanism, we can also perform CRUD operations on the extended sources, for example Apache Kafka and Kamelets. We'll see how you can extend kn with plugins to support these additional sources in the ecosystem.

Here we see the kn support for all the resources that Knative Eventing provides. As you can see, for channels and subscriptions we have complete CRUD support. For sources, we additionally have the list and list-types commands, which list all the source types available in your cluster so you can perform CRUD operations on them. And since we have one more layer of sub-commands under source, you can work with specific source types in your cluster, like the API server source and sink binding, and we support the CRUD operations for all of them. Here is a quick example to give you a sneak peek at how you can build the setup in the diagram above using kn, without writing a single YAML file.
As you can see here, we are building a couple of sources that send events to a broker, then a couple of triggers that fire based on a particular filter that we embed in the CloudEvents themselves, and finally a couple of interested sinks that we want to land the events on. Starting from the left, we create a couple of services; the green boxes map to the green commands. So you create two services, S1 and S2. The broker create is simply kn broker create; it will use whatever you have configured as the default in your cluster, whether that's the in-memory channel or some other broker implementation.

Then we create a couple of triggers, where we set a filter on the CloudEvents, say on an attribute like ping being equal to 2; this is the filter that identifies the CloudEvents we are interested in. The sink flag says what the destination of these CloudEvents should be, and with the broker flag we say which broker we want to receive events from. Finally, we send events with the default ping source, specifying that the destination of these events is the broker. The ce-override here stands for CloudEvent override: on top of the default attributes the CloudEvent carries, we are saying also add ping equal to 1 as an additional key-value pair in the CloudEvent, so that a trigger can filter on it. And this is the additional data payload we want to send. I think that's all. Once this is in place, events produced by the sources are received at the sinks. If you wanted to implement all of that using YAML, you would have a bunch of YAML files to create in the right order, but kn gives you these easy-to-use CLI options.
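The whole wiring from the diagram might be sketched like this; the names, filter values, and the event-display image are assumptions for illustration:

```shell
# Two sink services (S1 and S2)
kn service create s1 --image gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
kn service create s2 --image gcr.io/knative-releases/knative.dev/eventing/cmd/event_display

# A broker, using whatever broker class the cluster defaults to
kn broker create my-broker

# Triggers routing events from the broker, filtering on a CloudEvent attribute
kn trigger create t1 --broker my-broker --filter ping=1 --sink ksvc:s1
kn trigger create t2 --broker my-broker --filter ping=2 --sink ksvc:s2

# A ping source sending events into the broker; --ce-override adds the
# "ping" extension attribute so the triggers can filter on it
kn source ping create my-ping \
  --data '{"message": "Hello Knative!"}' \
  --ce-override "ping=1" \
  --sink broker:my-broker
```

With this in place, events from `my-ping` land on the broker, trigger `t1` matches them on `ping=1`, and they are delivered to `s1`.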
We'll see the demo of this later. Let's move on to plugins. kn provides a kubectl-like plugin architecture: your plugin binary just needs the kn prefix so that kn can recognize it, and you put it on your PATH or in a directory that you specify in your config. kn also defines a command structure for how your plugin is invoked: for example, for a Kafka source plugin you would name it kn-source-kafka, and the kn source kafka command becomes available.

We also have an inlining mechanism. If a vendor has a plugin specific to their Kubernetes distribution, they can inline the plugin right into kn and produce a single binary; we'll see how that's done. On this slide, we have a bunch of plugins already in place. The quickstart plugin we saw earlier is one of them. As you can see here, there is an admin plugin that gives you utilities for working with the Kubernetes cluster, a diagram plugin, the functions plugin from the presentation we just saw, and the last two are the extended ecosystem sources for Eventing, Kafka and Kamelets. So you can invoke the plugin as kn source kafka with all the commands beneath it.

All right, so how do you do the plugin inlining? On the plugin side, you declare the plugin's metadata, and on the kn side you wire the plugin into your kn source tree. Once that's done, you just run go mod vendor, which pulls in your plugin's source code so it builds together with kn. We have a handy script, hack/build: once you've pulled in the dependency, you just build, and your plugin is embedded in kn so you can distribute it with kn directly. All right, over to David.

Okay. So, we're getting close to the demo.
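Before the demo, a quick sketch of the plugin mechanics just described; the binary names and the build script path follow the talk and may vary between releases:

```shell
# External plugin: an executable named kn-<group>-<name> on $PATH
# is discovered automatically and invoked as "kn <group> <name>"
mv kn-source-kafka /usr/local/bin/
kn source kafka list

# Inlined plugin: after wiring the plugin into the kn source tree,
# vendor its module and build a single kn binary that embeds it
go mod vendor
./hack/build.sh
```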
Here is an overview of what we'll see in a pre-recorded video. It's very basic: a processing service, a broker, and an event-display service just to display the events at the end. We are going to create every resource with kn commands and create triggers to define the relationships and the filters. There are two implementations of the processing service, and we have an environment-variable feature switch to enable fast processing in the new implementation. We will redirect 50% of the traffic to it, and if we are happy with what we see from that A/B testing, we'll redirect the whole traffic to the latest revision, and you'll see how that works in practice.

Okay, I'll increase the quality; hopefully that will look a bit better. So, we start with kn service create; I can pause here. As you can see, there is a bit more output than just "service was created", because it runs in synchronous mode: we wait until the service actually reports that it is ready. Afterwards we describe the service to get a few more details, like the URL, which image it is running, the revision split, and some of the statuses as well. We do the same for the event-display service; I won't pause for that, I'll just keep it going, and describe the event display as well. Afterwards we create the broker; that's not synchronous, so it doesn't tell us much, but the describe command can tell us it is ready. On the right-hand side I'm setting up the log monitoring with the service log plugin, so everything is done through kn commands. As you can see, we are watching the processing service's log at the top, and at the bottom right we have the event display. Right now, we are creating the triggers.
One trigger will filter messages going from the broker to the processing service, and the processing service is set up to reply back to the broker with a different type of CloudEvent, which we then filter through to the event display. Now I'm going to invoke the event plugin, which is very handy for testing your setup or just producing events into brokers or other addressable sinks. The event plugin is one of the plugins released together with kn and consumed through Homebrew taps. There is a build command to assemble your CloudEvent if you'd like a few more fields; right now I'm just specifying the type, and one of the fields is "KnativeCon". Hopefully we'll see that processed on the right side. It took about five seconds, and finally we see the message received on the event-display side.

Then we move to the traffic split. There was a refactoring of the traffic flag recently that allows us to mesh everything together: we update the service with a new environment variable, which creates a new revision, but we also specify that the newly created revision should receive 50% of the traffic. So in one command you get both operations together. Through describe, we should see that we now have a 50-50 split between the two revisions. Finally, the script sends ten events to our processing service, and at the start of each log line you can see the name of the revision that processed the received event. There is some split, and the second revision is actually pretty fast compared to the first one and does more of the work.

Yeah, so that wraps up our demo, and I think we can go directly to questions. Oh, okay, sorry: the roadmap. What we are actually looking to do in the client realm:
There is a feature track for context sharing for plugins, which would be especially helpful for something like functions, because functions can run as a standalone binary but also as a kn plugin. The idea is that functions and kn would be aware of each other and share common context, like the service name and so on, so if you're in a functions context you could operate on that, pre-populating values for your function and so on. Other than that, there is the broker configuration management that is going to be implemented; we need to really think about how to approach it, because every broker might keep its configuration in something different, from a ConfigMap to a custom resource, so we need a good UX for it. Among the other things, plugin discovery and management is another big one, and I think Max is working on a plugin to better search for plugins, so that would be nice to have. And liveness and readiness probes: we don't have that support in Serving yet, and we are looking into it.

Okay, and now we can continue with the questions. That's interesting, we have questions. We have one question.

Thank you, gentlemen, for the presentation. Last year I had to develop two plugins for kn, so maybe some feedback for you, though things may have changed and improved since then. What I was struggling with were the standard patterns, like listing and JSON-formatted output. Basically, I went back to the kn code base and copied over a lot of libraries and some internal code paths to follow the same patterns as kn broker list and kn broker describe. I think there are a lot of patterns that these plugins should be reusing. Can you hear me? Okay. It would just make it easier for plugin developers to have documentation or shared libraries, like the Knative pkg package, right? If you build on those sources, I just go to the package and use the signals package to follow the conventions there.
And it would be great to have that for the plugins too, either as documentation or as shareable library code. The same goes for authentication: plugins need to do authentication, and I had to reinvent a lot of stuff.

Yeah. So, we did set up a client-pkg repository in the main Knative organization, but we haven't done a great job of moving a lot of the shared code there. Our approach would be that anything worth sharing with plugins gets consumed from client-pkg, and the client would then depend on client-pkg as well for all those common tools and utilities that could be shared. So yes, it's on the roadmap. Contributions are always welcome, too; we do struggle a little with the manpower to actually do it. But we are thinking about it, and it is on the roadmap. Thanks for the feedback.

Okay. Good. Thank you. Thank you.