Okay, welcome. We're going to talk a little bit about what's new in OpenShift Serverless and some updates about the community. So my name is William Markito Oliveira; I'm a product manager at Red Hat, and I have here with me Paul Morie. I'm Paul Morie; I lead our serverless engineering team at Red Hat, and I'm on Knative Steering. Nice. So let's dive in. But before we go too deep here, let's review some of the first principles that we really use to guide everything we do as far as product at Red Hat, especially on the cloud team. So of course, we really focus on openness, working with communities and open standards, and driving development with those communities. And you're going to hear some updates about that regarding recent developments in the Knative community. And then on the right side, you have, of course, the hybrid aspect, which is really key for everything we're doing: making sure that the experience we deliver for those projects as products is great on public and on private cloud, and of course works really well anywhere you want to run OpenShift. Today, we're going to focus on Knative for the most part, like I said. And just to quickly recap what Knative is: Knative comes with three main modules. Serving, which is really focused on request-driven compute; it's a way for you to scale applications up and down, even to zero, based on demand, based on the number of requests. Then you have another module called Eventing, which is really focused on the infrastructure to send and receive events to, again, start those applications. And then you have, of course, a CLI client, which allows you to interact and build applications using scripts or using your favorite terminal. Now, the main difference here is where OpenShift Serverless is going, right? So we take those bits from Knative, and we extend that with two things for the most part.
So one is the ability to write functions as well, which is not something that you have available in Knative per se; it's something that we are doing with OpenShift Serverless. And we also package all of that with an operator that allows you to install, upgrade, and configure Knative inside OpenShift, so that you can leverage other services from the platform as well: things like logging, metering, monitoring, all of those other services. That's the main difference when you think about OpenShift Serverless and Knative. Now, I'll pass the ball to Paul to talk a little bit about some of the updates from the Knative community. Yeah, so let's talk about a subject that's probably important, to some extent at least, to almost everybody in this room, which is open governance. If you follow the Knative project, you're probably aware that open governance within Knative is something that we've been working on for quite a while. Earlier this year, we moved to an elected model for the Technical Oversight Committee, or TOC, and had the first elections. And I want to just take a second here and thank everybody that participated in that election process, everybody that ran, and congratulate everybody that was elected. We now have a TOC that is elected and composed of folks from IBM, Red Hat, VMware, and Google. And since that time earlier this year, when we moved to that elected model for the TOC, we have been working toward adopting a similar model in the Steering Committee. And actually, I have good news, because I'm here to talk about the specifics of that new model that we've adopted in a new charter. So for context, before we adopted this charter, the Steering Committee had been in a bootstrap phase for quite a while, since early 2019. And the model was basically that there were appointed representatives from some specific companies that were most active at the time the Steering Committee was formed.
So we had some representatives from Google, from IBM, from Red Hat, and VMware. Google had a majority of seats within the Steering Committee, and members of this committee served as representatives of their employer, as opposed to serving as individuals. Additionally, there were no rules or guidance for how new members would be added to Steering and how we would maintain that committee over time. And one of the things that we heard from folks that were interested in engaging in the project was that the lack of clarity around how you would lifecycle this committee in a way encumbered the project, because there wasn't a clear way to develop influence at the level of Steering. So in the new charter, we adopted some changes that, I think, address those concerns. Before I talk about those, remember, overall the Steering Committee's job is to develop the community and to help grow the project and the organization that evolves around that project within the community. So we have moved to a new model where Steering will be elected and where no vendor can hold a majority of seats on the Steering Committee. This may sound familiar if you follow Kubernetes governance; in fact, the Kubernetes governance scheme was one of the things we looked at a lot for inspiration as we arrived at this new model. The size of the Steering Committee is five currently, but we will probably add seats next year if the project continues to grow. So in our new model, members of the committee serve as individuals instead of representing their employer, which is also a very strong principle within Kubernetes governance. And this allows us, when we're in the community and serving on Steering, to act with our community hats on, as opposed to our vendor hats, and to be centered in the fact that we're participating as individuals rather than having to represent our employer's interests.
So in this first year, and we've literally just in the last couple of weeks adopted this new charter, executing it is sort of a work in progress, but we will have nominations open soon and elections later this year to begin to cycle toward that elected, community-based model. We'll have two seats up this year, and there will be at least three seats up for election next year. And that is the TL;DR of the Steering changes. One of the things that came up in our community discussions around this (and there were a lot of great discussions within the community; in fact, you can go and watch some of the videos online if you would like) was that the question of what is in scope for the Knative trademark is maybe a better fit for a committee with a slightly different organizational scheme, since trademark is probably most important to vendors. So what we did, and I will just go ahead and advance the slide now, is we have moved the trademark concern into a new committee called the Knative Trademark Committee, or KTC. And this committee is going to govern the scope of the trademark, which currently contains the Serving and Eventing projects. The seats on the KTC are held by vendors, and members of this committee represent their employers. Currently this committee looks sort of similar to the bootstrap Steering, in that we have Google represented as the owner of the project, and we have IBM, Red Hat, and VMware represented. Going forward, the KTC will consider adding new members every year. So this is something else that I wanted to make sure we touched on in this update, because it helps to address that concern of clarity around how you, as an individual or as someone making choices about where you spend your developers' open source time, can engage with the Knative project and help to develop influence.
So going back to the KTC: when they look at adding new members, they'll consider contributions that any particular vendor has made. And there's also a process that allows vendors to articulate contributions they've made that are hard to count, because not every open source contribution is easy to quantify. It's easy to think about counting commits; it's less easy to account for things that don't touch GitHub or that are harder to count, like influencing a design discussion, that type of thing. So in the event that you're thinking about engaging with the project, just know that when we consider membership in the Trademark Committee, any contribution counts and can be counted. And there's a process for folks that feel like maybe they did more of the things that didn't touch GitHub than the things that did: you can articulate that with the exception process. So. Go ahead, William. Yes. So for example, documentation, or even just participating in some of these meetings that we host publicly upstream, those all count as contributions, right? Absolutely, yeah. And since we know that it's hard to count some types of contributions, and it's also hard to even foresee all types of contributions, in the event that there's something you want to make sure the Trademark Committee considers, you can write up a blurb articulating what you feel your organization's contributions are. Nice. That's awesome. So the key takeaways I want to leave folks with around governance are that we have significantly improved clarity in these two different dimensions, the Steering Committee and the Trademark Committee: the composition of those committees, elections for Steering and who can serve, and clarity around how vendors can get a seat on the Trademark Committee. And of course, great community participation during this process.
I really want to just take a second here and thank everybody that participated, both my colleagues at Red Hat and my open source colleagues in the community. I'm really happy to be able to give you all this update today, and I think that community participation was really key to making that happen. So thanks, everybody that participated, and if you're really interested in how this particular sausage was made, you can go and watch the videos online if you search for Knative Steering Committee. Nice, yeah, definitely. So elections and everything going on this year, not only in the US, but also in open source projects. That's awesome, great. Yes. So let's do a quick recap on Serving now. And again, the idea here is really just to do a brief recap, and Paul and I will talk a little bit about some of the main components that are part of Serving, starting with Service. So what's a Service, Paul? What's a Knative Service? That's a good question, and don't let the name fool you: it's different from the Kubernetes resource called Service. It's not the name I would have personally chosen; in fact, there was a long drawn-out process of arriving at this name in the community that the project had at the time. But when you think of what a Knative Service is, it is basically a very high-level resource that is similar in certain ways to a Kubernetes Deployment, in the sense that it generates these other resources that actually go do the work. So the Service is basically the highest-level container that we have, no pun intended, from an API standpoint: it encapsulates the configuration for the serverless service that you're deploying and the routes that bring traffic into it. So moving down into what those mean: there's a resource called Configuration, and its job is to generate immutable snapshots of an application, called Revisions, that the Routes that are also specified in the Service bring traffic into.
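To make that concrete, a minimal Knative Service manifest might look like the sketch below; the name and image are hypothetical placeholders, not from the talk:

```yaml
# Hypothetical example: the name and image are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/greeter:latest
```

Applying this one resource is what causes Knative to generate the Configuration (which stamps out a Revision) and the Route described next.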
So you can think about that Service as a serverless flavor of Deployment, where you can specify both the things that you'd expect to specify in a normal Kubernetes Deployment and information about how traffic should go into those Revisions that are created and how the traffic should be split. Yeah, what I really like about Revisions and this idea of snapshots is this ability to enforce some best practices: every time you push a new change, either configuration or code, to your application, that snapshot is going to be generated. And that allows you, for example, to do things like generate a preview URL for that particular version of the application when you don't want all the production traffic to go to that particular version, right? Maybe you want to do what they call a dark launch, so it's just people that know the URL; a random URL is automatically generated for that Revision, and you can influence that, of course, if you want. That's one of the patterns that I see that is really useful for Revisions. And then another one, of course, is the usual A/B split or canary deployments and whatnot. But this idea of doing live previews of code that you may not want all the production traffic to consume yet, I think, is really, really powerful, right? And one thing also... Absolutely, yeah. ...is that all of this traffic-split functionality is provided out of the box just with Knative. But you can also extend that with a service mesh, right? You can use a service mesh if you want as well. But that's an option; it's not something that we are imposing in order to perform that traffic split, right, Paul? Yep, yep, and I agree that the ability to just get traffic splitting out of the box is really, really powerful.
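As a sketch of what that looks like in the Service API (the revision names and tag here are hypothetical), the traffic block for a dark launch might read:

```yaml
# Hypothetical revision names; a "tag" gives a revision its own URL
# so it can be previewed without taking production traffic.
spec:
  traffic:
    - revisionName: greeter-00002
      percent: 100        # all production traffic
    - revisionName: greeter-00003
      percent: 0          # dark launch: reachable only via its tagged URL
      tag: pr-15
```

Shifting the percentages later (say 50/50) is how the canary and A/B patterns mentioned above fall out of the same mechanism.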
If I think about my previous industry experience: we want to test some alpha feature, and we want to send maybe 1% of traffic to it and just see what happens. I have spent time writing infrastructure to do that, so getting it out of the box is pretty wild. Pretty cool. Yep, yep. One thing also that we are working on internally, as far as experiences in OpenShift, is to make sure that there is an easier path for you to generate those snapshots, those Revisions, using pipelines, for example Tekton. So now you have a CI pipeline that can, from a Git project, build and deploy a new version of your application and automatically generate a preview URL for your app. And maybe you want to post that back, for example, in your PR, so that the engineering team, or maybe your designers, can see the layout and interact with it, and only after that do you eventually promote that application to prod, right? It's really, really interesting. Cool, so that's essentially Serving in a nutshell. And then of course one thing that we did not touch on as far as APIs, but that's just inherent to this model, is the ability to scale up and down based on requests, right? So that's what's really triggering this application here, and those requests can be HTTP, of course, but they can also be cloud events; they will be wrapped in HTTP requests, but the payload itself can be a cloud event, which really leads us into the next section here about Eventing (waiting for my slides to reload; there you go). Eventing is really the module that we want to talk more about today, because Serving was already considered GA in OpenShift Serverless since March, I believe, and now we are finally considering Eventing a GA module as well, right? And with Eventing, you essentially have the ability to connect external systems to your application. So maybe you want to cover some of those APIs then. Absolutely, yes.
So let's start with the earlier generation of APIs, which are single-tenant. In this earlier regime, developed early in the project's lifetime, there are event sources, and you can think of these as the on-ramps for events to come into the system. There's a variety of these for different cloud services and for different middleware brokers. I probably shouldn't have used the term broker (or maybe we shouldn't have used the term broker in Eventing), but when I say broker here, I mean an MQ-broker type thing, or Kafka. So there are a number of different event sources that are the on-ramps for events to come into the system, and of course you can build your own if the exact one that you want doesn't exist. Inside the system, using that transportation analogy, there's something called a channel, and you can think of a channel as the road that an event that's come in on that on-ramp travels through the system on. These are basically the forwarding and event-persistence layer. There's an in-memory implementation that's maybe more suitable for development, but there are also flavors backed by different durable stores, like ActiveMQ or Kafka. Again, these are the roads that events travel over. And continuing that analogy to just one final API resource: subscriptions are what connect events that are traveling in channels to their consumers. So maybe you can think about the subscription as the garage door: you get in through the on-ramp, you travel along the road to your destination by passing through that subscription garage door. And the receiver on the other end of that subscription can be a Knative Service, but it can also be a normal Deployment or something else. So those are the first-generation Eventing resources. More recently we've got what you might call eventing mesh APIs, and the main thing there that we'll talk about is the broker.
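The channel-and-subscription wiring just described might look roughly like this as manifests; the names are hypothetical, and the in-memory channel is the development-oriented flavor mentioned above:

```yaml
# Hypothetical names; InMemoryChannel is the development-oriented flavor.
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
metadata:
  name: demo-channel
---
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: demo-subscription
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: demo-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service     # could also point at a plain Deployment's Service
      name: greeter
```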
The broker is an entity that can send and receive messages from multiple sources and subscribers. Brokers work with triggers, where the trigger sits between the broker and the receiver of the event and implements filtering. So if you don't want every event that's going into a broker to be received by a particular receiver, you can use a trigger to filter those events out. And then there are some additional higher-level APIs, and we're thinking here about patterns of enterprise integration. There's a Sequence, which allows you to wire an ordered series of subscribers and generates the channel-and-subscriber setup that you need to pass from A to B to C to D. And then there's another variant called a Parallel, which allows you to wire a fan-out to multiple subscribers and associate filters with all of those. Nice. So drawing a diagram with those APIs, you would get this diagram here, where you see the broker at the top, and you're seeing all the different sources and the multiple event types going in, so types two, one, and three (there should be a three).
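The exact-match filtering a trigger performs on event attributes can be sketched in a few lines of Python. This is an illustrative model only, not the actual Knative implementation, and the sink names and event attributes are made up:

```python
def matches(filter_attrs, event_attrs):
    """True if every attribute in the trigger's filter exactly matches
    the corresponding CloudEvent attribute (an empty filter matches all)."""
    return all(event_attrs.get(k) == v for k, v in filter_attrs.items())

def route(triggers, event_attrs):
    """Return the sinks whose trigger filter matches the event."""
    return [sink for sink, f in triggers.items() if matches(f, event_attrs)]

# Two triggers on the same broker, each interested in one event type.
triggers = {
    "sink-a": {"type": "com.example.one"},
    "sink-b": {"type": "com.example.two"},
}
event = {"type": "com.example.two", "source": "/demo/timer"}
print(route(triggers, event))  # -> ['sink-b']
```

This is the behavior the diagram illustrates: the broker fans events out, and each trigger only lets matching events through to its sink.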
But then you see that the broker is doing the filtering, saying, hey, these types of events I'm sending to this application that's represented by a sink, and this other type of event I'm filtering and sending to a different sink. This built-in routing and filtering mechanism is really powerful, and it can be used to implement many of these EIPs. And then the channel: again, just adding a diagram to that, the idea here is that you could have multiple sources sending different event types, but the channel will carry all of those events to the subscribers, right? That's where the subscription comes into play, and your application would be a sink there. But keep in mind, again, as Paul said, that the sink could be a Knative Service, it could be a URI, and it could be just a Kubernetes Deployment as well. So this is what's coming as far as GA, and we're going to see a little demo of that toward the end. But there's another thing that OpenShift Serverless adds on top of Knative: I would say we extend Knative with functions. Functions are something that people often associate with the word serverless whenever they hear it, but what we want to make sure people understand, and one of the key differences between serverless containers and serverless functions, is really what goes inside the container that your application will be running in, right? So whenever you're building a serverless container, you're responsible for what goes inside that container. We have some very small requirements there; for example, we send those events through an HTTP request, but those applications can still receive, of course, cloud events as part of the payload, or just any HTTP request. So this is a very good way for you to package or reuse current microservices that you have, or any containerized application that you might have that fits this model, and run that application as
serverless. So now that application can scale up and down, can receive events, et cetera, and you can use any programming language of choice; you can package whatever you want inside that container. Now, when you transition to this serverless functions model, that's where you get this extra piece of code that is the function runtime. The function runtime really helps you by implementing this HTTP server, this wrapper around how you're going to receive those events and how those events are going to be sent to your user code. And also, because we are in control of that function runtime, we can be a little bit more opinionated about it. So for example, maybe there's something specific that we want to do as far as tracing: we can package that tracing capability in our function runtime, whereas in a container you still, of course, have choices and can decide to use your own implementation or go a different route. So that's the difference here. Now, looking at most solutions in the market today, I would say that quite often you have to choose between one or the other, and they have completely different user experiences. I think the main difference that OpenShift Serverless is bringing here is this idea of running serverless containers and functions in the same experience, right? You have the same exact user flow, and you can go back and forth: you may start with a container and then see a good fit for benefiting from a function, or vice versa. You can start with a function and say, you know what, I actually want to do certain things that I don't have a function runtime for; I want to run that application as a container now. So you can choose to go either way. Yeah, and I think that's a great quality for us to have, because if we look at how folks tend to use functions and microservices, it's very common to have a spectrum of things: maybe you've got some microservices that evolved out of functions, and maybe some microservices that you already had
that you want to get the benefit of event-activating, with scale to and from zero, but you also have things that you're implementing as functions. So it's nice that we treat them similarly, because there is that interplay and back-and-forth and evolution of systems, where maybe you start out using functions and evolve to microservices, or decompose a microservice into functions. Yeah, that's super powerful. So essentially, what we are doing with functions is making sure that you inherit and benefit from everything Knative already provides. What we are doing is providing a plugin to kn, the CLI for Knative, and we are calling that plugin faas for now. That plugin allows you to have a local developer experience, which is super important if you want to iterate really fast and you may not have access to a cluster or to the cloud all the time. So you can have a local build experience and iterate, but when you build, we want to make sure that the way you are producing those containers is also standardized, so we are leveraging buildpacks for that. And we are already providing buildpacks for three runtimes, Quarkus, Node.js, and Go, but that list of course will extend as we progress on our journey from developer preview to technology preview. Once you build those containers using the faas CLI, you can then, of course, deploy, and when you deploy, they become Knative Services, right? Again, all the things that we talked about here for Serving or Eventing apply, and some of the things that you see as far as developer experience still apply as well. And then another important aspect, like we said, is that you may want to build web apps, just vanilla web apps; you can do that with functions too. It's a very common pattern; you can implement single-page apps or things of that nature. But one of the most powerful use cases for functions and serverless, of course, is to deal with events, so you can receive cloud events
with your functions as well, and we're going to see a little bit of that experience in the demo now. I pre-recorded it to make sure I could talk and not be concerned with typing at the same time. Let me start sharing my screen here, and I will walk through that, and then, if we have enough time, I can also do a little live demo of our console. So I'll hit play here. I have an empty directory, and the very first command we're going to do (let me do a quick pause there) is kn faas init, and I'm going to specify what type of function that is, so that the template the tool generates is already configured for that particular type. It could be events (it's going to then receive cloud events) or HTTP, and then the -l flag is used to specify what programming language you want to use for the runtime; in this case we're picking Node.js. Now, triggering a build is, again, kn faas build. Notice that, of course, I'm not specifying any particular details about a Dockerfile; there is nothing like that that we're requiring a developer to do. And here below you can see the template, the function itself, that got generated, and because I selected cloud events, you see that the context is handing your application a cloud event; that's how you can extract the cloud event and its data from that context inside your function. This is all the code that you need in order to process a cloud event. Now that the build is done, I'm just going to perform a deploy, and on the left side you see the OpenShift console. Here I already have two Knative Services running, one Quarkus application and another Spring application, and they are connected to a channel; remember that Paul explained that a channel is this path that is going to carry events from event sources to your application. So here I have two event sources. Just for the sake of the demo, I'm using a Jira event source and a ping source, which is kind of a timer, just to keep sending events to those
applications. But now I've just deployed the function, and as you can see, it's going to land in that topology view, just like any other application. And now I can wire that application up by literally dragging and dropping. I can use the CLI for that as well; kn provides a few commands that you can use to do the same thing, to subscribe to a channel and subscribe to an event source. But here I'm literally just dragging and dropping using the UI, and now I'm going to subscribe that function to that same channel. So now every update on Jira, and every event that this timer triggers, will land on this microservice that is built in Quarkus, on this Spring application that is also another microservice, and on this function that we just deployed using the functions functionality of OpenShift Serverless, which is built in JavaScript. So again, very short, very simple, but still very, very interesting and very powerful. Now, one of the things that I want to just call attention to, if we can pause this here, is that if you look at the channel, the one that has a little binary-type text on it, you'll notice that's an in-memory channel, and that is something that is probably workable for you as a developer; in production, where you don't want to have the chance of lost events, you'd probably be backing that with something like Kafka. Yeah. So let's take a look at the topology view, and now I'm going a little bit off script here, but just to show you a live cluster as well; we recorded that demo so I could talk and do the demo at the same time. First thing here: this experience, this visualization that you're seeing, is really the way we are showing multiple revisions for one application. In this case, you can see that this particular revision has 100% of the traffic, and then I have these other applications here that are essentially representing PRs, each with the number of the PR that triggered a pipeline that built this container as a revision
here. They all have 0% of the traffic, but as I hit those URLs, you see that they will start from zero, and they all have unique URLs. So again, this is PR 14, this is PR 15, but if I hit the main URL for the service, that is going to trigger the one that has 100% of the traffic. Now, for this experience, you can of course configure the traffic split using the CLI, but we also offer a way to do that using the UI as well. So I can say, you know what, I think this one here will take 50% of the traffic, and I want the latest PR, the one I think is good, to take 15% of the traffic. And now you see, if I hit the main URL, I'll have a 50% chance to hit either of these particular versions of my application. Now, as far as Eventing, and to build on what Paul just said, whenever you are creating a channel, we offer an experience where you can just select in-memory, and that's going to create an in-memory channel, or you can select Kafka, and now you need to specify, of course, what Kafka broker you want to use; that was already pre-configured for the Eventing installation in this cluster, and you can literally just specify the name of the broker here. So let's say my-broker, and that's going to be backed by a Kafka channel for durable and more reliable persistence when sending and receiving events. And the last piece to share, very briefly here, is the event source experience. These are some of the event sources that we have out of the box. You can select Kafka, that's of course a very popular one; you can just point to the bootstrap server and start receiving events. You can pick any of the event sources powered by Camel K; so let's say I pick SQS, and you provide the configuration here. For now this experience is YAML; we are working on that to make sure that we have forms auto-generated for event sources as well. But it's a very simple configuration, just your keys and the queue name for SQS, of course. Hit create, and that's
going to create the event source for you. So I'll do one for Kafka, of course, pointing to one that I have already running in my OpenShift cluster, pick the consumer group, and here I'm going to select the sink. I could use a URI, so this could be any destination, any URI that can receive that Kafka message, or an application; in this case I'm just going to select the previous application that I built and hit create. You see that now I have a Kafka source connected to this application to send and receive events. Now, one of the things that I thought would probably be good to disambiguate, in case there was any question, is that when we talk about the Kafka source, this would be something you would use to consume events that were in a Kafka topic. So as opposed to Kafka backing a channel, where the Kafka topic is a durable store for events, this is if you actually wanted to consume events that were in a Kafka topic, right, William? Right, that's correct. I was just demonstrating how the experience would be if you want to connect your in-memory channel to your application and then eventually attach the event source to this channel as well. Now, I am running a 4.6 nightly build, so maybe there's something here that my drag and drop is not doing properly, but you get the idea, right? I'll go back to the slides now. I guess one brief thing that I would just point out is this integration that we did with pipelines as well, which allows you to produce a pipeline out of the box whenever you're importing an application from Git. And I'll just very quickly show you that; I know we are getting close on time, but let's say I'll pick one particular application here, this vanilla Spring application from upstream, select Java, select Knative Service, and now here I can select to add a pipeline. So this is going to generate a Tekton pipeline that can be used to build my application, and you see that I did not have to create this pipeline from scratch; this was just part of the
import flow from Git. When I hit create, it's going to start a build; eventually you see the build status go from New to Running, and your application will be completed here. And when I go to Pipelines, you see that there is a new pipeline that was created, and you can still configure that pipeline if you want, using either the pipeline builder, adding more steps to your pipeline if you want, or editing the YAML. So a very, very interesting experience, and again, very easy to get started with as well. Let me go back to the slides and stop sharing my screen real quick. So this is pretty much all we had. Anything else you want to talk about today, or is this enough of an update for folks? Well, I always have trouble calibrating that, and it doesn't get easier when there are no heads nodding in the room that you can look at. But I think we've hit folks with a lot of information, so I'll just say thanks a lot for watching our session, everybody, and I hope that you can check out OpenShift Serverless and the Knative project. We'd love to have you contribute, love to have you involved in the Knative community. Thanks a lot. Thank you very much, and if you have any questions, don't hesitate to reach out to both of us on Twitter. Thank you, folks. Bye.