Good morning, good afternoon, good evening, wherever you're hailing from, and welcome to a special KubeCon EU office hours session. Today we will be talking about serverless and serverless functions with our Knative team, and I'm very happy to have them on. I don't feel like we've had enough Knative on the channel lately, so it's very good to have you all here. Josh Berkus is the one who actually organized most of this. I know he was just about to take a sip of his drink, so Josh, if you want to go ahead and do that, please do, and feel free to introduce yourself.

Hi everybody, I'm Josh Berkus. I'm in charge of Red Hat community for all things cloud native, and that includes Knative and serverless stuff. Hopefully some of you just watched the Serverless Workflow spec session that just ended. We are going to get into Knative and functions here, and we have a good chunk of the Red Hat Knative serverless team with us. So if you want to introduce yourselves, we'll start with Roland.

Yeah, sure. My name is Roland Huss. I'm based in Germany. I'm the OpenShift Serverless architect, and I'm also on the Knative TOC, the Technical Oversight Committee. Within Knative itself I'm also working on the client part. And that's me, and I hand over to Zbynek.

Hello, my name is Zbynek. I'm based in the Czech Republic, in Brno. I'm working in this OpenShift Serverless team, most specifically on the functions part, and I'm also a maintainer of project KEDA, which is something a little bit different. And I hand it over to Luke.

Hi folks, my name is Luke Kingland. I'm based in Tokyo, and I'm also working on the functions aspect of OpenShift along with Roland and Zbynek.

Wait, what time is it where you are? Right. Yeah. Okay, I see.

I'll just introduce myself as well. I'm the random extra person in the room, the marketing person. I'm Simon Seagrave, the product marketing manager for OpenShift Serverless, and I'm here to support the team as the questions come through. So please engage with us and ask some questions as Roland, Zbynek and Luke go through today's session. We'd love to hear from you. And I understand you've got some slides to get people started here.

Yeah, sure, we have some slides prepared for you. There's an introduction to Knative just to warm up, and then we will go into more detail about the different components of OpenShift Serverless and Knative, and especially functions. That is kind of a new thing on top, and I think it's also super exciting. So let me share my screen. I believe... yay. Yeah, that's good.

So let's talk about serverless, and by the way, if you have any questions, please ask immediately. We will try to make this session as interactive as possible, so just ask and we will stop and answer your questions. But let's start with serverless, with Knative and Kubernetes. First of all, what is serverless? You probably know that guy, and you probably also know what CGI bins are if you're a little bit older, like me. And of course the guy here has a point: is it really more than just a CGI bin that you throw over the fence, where somebody then operates your CGI bin in some sense?
This is true, but I think we can show you today that serverless is much, much more than the good old CGI bins, and that's the gist of this talk. We will talk about serverless, but we will also talk about Functions as a Service. As you know, serverless is a very fuzzy term. There's a lot of buzz around it, and if you ask 10 people about serverless, you probably get 12 different opinions about what it is. So we came up with a definition. I don't claim this is the whole truth; it's something that we use as our definition, and we also make a distinction between serverless and FaaS, Functions as a Service.

For us, serverless is really a deployment model, which means you abstract away the whole infrastructure that runs your application, like with the good old CGI bin where you really do not care who's running it. It's a deployment model where the servers are not visible to you, which means you also don't have to manage those servers, and you can have a fine-grained billing model where you only pay for what you are really using. Pay as you go, so to say. I think this is one of the main benefits of serverless. It also comes with a deployment packaging, which in our case is a container image, but there are other serverless architectures out there, like AWS Lambda or Azure Functions, that have a different deployment format.

On the other hand, there is Functions as a Service, which is a programming model that builds on top of serverless. Functions get deployed with a serverless model, but it gives you a higher-level abstraction for how you create your application. There's a function signature that you need to fulfill, and then you just hand over the function itself without any other runtime. In the context of Knative it also adds the build aspect to the system, and it's typically used for glue-code kinds of applications, where you connect services together. But we have a slightly more detailed definition, so maybe Zbynek can talk about the fine-grained differences between serverless and functions.

Thank you. I just want to highlight what you have mentioned. How do we look at serverless in Knative, and at functions? If you have a container with your application and you just want to make it serverless, we call this "serverless containers". You can package your container as a serverless container with Knative, so it can accept CloudEvents and it can accept HTTP requests, but all the application code, all the handling of incoming requests and so on, needs to be developed by you. Functions, on the other hand, provide you with some glue, some logic, that manages the handling of the incoming events and incoming requests for you. But in the end the function itself is packaged and deployed as a container.
So basically both approaches, once deployed on the cluster, behave the same way; it's really a matter of the packaging, the setup, and, in the case of functions, the glue. This was a general remark about serverless and functions. Now, to set the context right, let's have a look at what Knative actually is.

Knative is a serverless platform where you can run and operate your applications in a serverless way. As I mentioned, serverless is really the deployment model, and Knative is the platform underneath that model which allows you to deploy and manage modern serverless workloads. This is the self-definition of Knative that you find on the Knative web page.

Knative consists of two major components. One is called Serving, the other one is called Eventing. Serving is all about a request-driven model that allows you to run your containers in a very simplified way, and it allows you to scale your applications automatically based on the load that comes to your service. This includes scaling down to zero, which means that if your application is not used at all, you don't have to pay anything for your application to be deployed. I think this is one of the big things about the Serving component. On the other hand there is Eventing, a part which provides a common infrastructure for consuming events from external sources, routing them through the system, and eventually reaching a Knative service which reacts to events from the outside. So it's perfectly suited for EDA-type applications, event-driven architectures.

So this is the high-level view, but I haven't really talked about what Knative is as a project. Knative is an open source project which was started in 2018 by Google, and it's a community-driven project with a lot of vendor backing, so there are big companies behind Knative, supporting and sponsoring its development. But it's totally open for the community. You find the coordinates for the project on the slides: the code is on GitHub and there's a dedicated website. As I mentioned, there are big companies like Google, Red Hat, IBM, VMware and so on supporting Knative, and in the last year some changes happened to the governance. We now have a very open governance model which is a little bit similar to Kubernetes, with similar kinds of committees: a steering committee, and a technical oversight committee which is more for the technical direction. But it's important to mention that Knative itself is not part of a foundation.
It's still a project which has a trademark and a brand, and this trademark is still owned by Google. But there is now also a trademark committee which deals with all these trademark and brand related issues, and its seats are given to the biggest contributors to the project. Otherwise, there are multiple working groups which meet regularly and work on certain aspects like Serving and Eventing, but also on the client and other things. We have a six-week release cadence, and the current version is 0.22, but we are working hard to get 1.0 out this year. All of the work currently is really focused on getting to this 1.0 version.

The next question might be how you can try out Knative, and there are several options. You can run Knative on any Kubernetes cluster. I forgot to mention that Knative itself is of course based on top of Kubernetes, a very important thing, sorry for that. But yes, you can run it on any vanilla Kubernetes cluster: on Minikube, on kind, or in the cloud on any Kubernetes offering. How to do this is described on the Knative website. But there are also commercial offerings where you really get support, and where these systems run your Knative workloads for you. One of them is IBM Cloud Code Engine, which allows you to run Knative services directly in the cloud, like you can do with Google Cloud Run, which runs on GKE. These are public cloud offerings, where you have kind of a virtual Knative installation and you don't have to worry about the management. And then we have Red Hat OpenShift Serverless. This is the product that we're working on, and it supports all Knative features. You can run it in different ways: on-prem, or managed in the cloud, so you get all these hybrid cloud features, and you can get full support. So you have plenty of choices for using Knative, and I think every one of these offerings also has a free tier, so you can try it out as well.

Good, so that was the introduction. Are there any questions about this overview before we dive into more details?

There are actually a bunch of questions, one of which I don't think we can actually answer. And of course some people came here already being Knative users with technical questions. One of them wants to know, and we might want to follow up in chat, how to enable auto-TLS in Knative, because they've had a lot of trouble doing that. They say the cert-manager ConfigMap doesn't seem to be understood by Knative.

Yeah, okay, this is a good question. Knative does not have TLS-related features intrinsically; it works together with external services. With Istio, for example, you can enable TLS-like features such as mTLS by leveraging a service mesh. But otherwise auto-TLS should be supported directly, and there is documentation around that. If this is still an issue, I really would like to ask you to come to our community channels on Slack, or open a GitHub issue. We are there to help. It's hard to analyze it from here, but in principle we have these kinds of features.
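For anyone who wants to try the auto-TLS setup at home: it is driven by ConfigMaps in the knative-serving namespace. A hedged sketch follows; the key names have changed across releases (autoTLS in older versions, auto-tls and later external-domain-tls in newer ones), and the issuer name is hypothetical, so check the documentation for your version:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-network
      namespace: knative-serving
    data:
      autoTLS: "Enabled"          # key spelling varies by release
      httpProtocol: "Redirected"  # send plain-HTTP callers to HTTPS
    ---
    # Read by the cert-manager integration (net-certmanager)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-certmanager
      namespace: knative-serving
    data:
      issuerRef: |
        kind: ClusterIssuer
        name: letsencrypt-issuer  # hypothetical issuer name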
Yeah, where's that community channel? We'll go ahead and put it in the chat. It's the Knative Slack, which you can find from knative.dev, and we can paste the link in the chat as well. So somebody could do that.

The next question is: do serverless containers in Knative have built-in autoscaling, or do you also need to add an external load balancer? Good question.

I don't have to answer all the questions; Zbynek or Luke, jump in if you'd like. But yes, of course we have autoscaling intrinsically; it's one of the core features. And the good thing is that this autoscaling is based on consumption. It's not like the old autoscaler from Kubernetes, which is by default based on CPU and memory consumption. Knative has a very sophisticated autoscaler, and we will see it in action in a minute when I go into the demos. By default it counts concurrent traffic and scales based on that, which means it really counts how many requests are in flight. Then, depending on that number and on the configuration you have chosen, it either scales up your pods, if the concurrency is higher than a certain threshold, or, if after a certain amount of time no traffic comes in, it scales the number of replicas for your application down to zero. So, long answer to the question; the short answer is yes, of course, autoscaling is built in.

Roland, just a really quick question, one we get asked all the time around this. Obviously one of the real key value propositions is the scale to zero, which from an infrastructure person's perspective is great because it frees up resources, so you can oversubscribe your hardware, sweat your hardware a little bit more than normal. In the real world, with your engagement with customers, that autoscale functionality: how many people are you seeing out there actually leveraging that scale to zero, versus maybe scaling back to one or two instances? Are people adopting that new capability?
Yes, actually, I think this is one of the key features why people are really trying it out, especially when they have very uneven traffic shapes, or bursty traffic. Let's imagine you have a Christmas card shop which only sells Christmas cards at Christmas and not during the rest of the year. Then of course it would be super awesome if your application could sleep for, let's say, 11 months and then fire up for the last month. This is an extreme example, of course, but you get the idea. If you don't have a constant traffic shape, with traffic constantly flowing in, then it's super convenient to scale down to zero. This is especially useful for startups which want to minimize the risk on the cost side, so that you can try things out, and if it doesn't work out, it just doesn't cost you anything. So I think this is one of the big key features here.

Yeah, and I would like to highlight that if the deployment is scaled to zero, you don't have to be afraid that an incoming request to this scaled-down service is lost, because Knative basically holds the request, scales the deployment to one, and then forwards the request to your application. So no requests are lost during the scaled-down phase.

Awesome. We're getting some other questions, but I think some of them are things you're going to cover in these slides. One thing I will interrupt and say, because somebody's asked this a couple of times, is a question we're not going to answer: somebody is asking questions specifically about Google Cloud Run. This is the Red Hat serverless team, so we can't actually answer questions about Google Cloud Run. We just don't know; that's their thing, and I encourage you to attend a Google developer relations session where they can answer those questions. So if you want to go ahead...

Yeah. Let's talk about Serving. As I mentioned, Serving is one of the core components of Knative. It's really about routing your traffic to your application, then scale to zero, which we already talked about a bit, and also revisions, which means you can have snapshots of your applications that allow you to roll back and also to split your traffic among different versions of an application. You will see this in a second in the demo.

What are the concepts? As mentioned, it's really demand-based autoscaling, based on the real traffic that comes in, so you get a direct relationship between the usage of your service and the resources it consumes, the number of pods that are running. It separates the code and the configuration: the code, of course, is in the container image, and the configuration is what you define here, and that configuration is snapshotted. It's also kind of opinionated. Sorry, this slide is a little bit older, so some of the restrictions have already been relaxed.
One of the big restrictions that still applies is that it needs to be a stateless application, so that you can have this highly dynamic autoscaling; but you can have multiple containers within your Knative service these days. Then it has rich traffic-splitting capabilities, which means you can enable custom rollout strategies like canary deployments and blue-green deployments. This is super convenient because you have different versions of your application running, and you can define with fine granularity how much traffic goes to which version, try it out, and then scale it forth and back as you like. So all the deployment strategies missing from core Kubernetes, you can have with Knative as well.

This is how it looks in code. Knative itself is implemented with CRDs on top of Kubernetes, so it uses the Kubernetes extension model, and you have the high-level Service object, a custom resource, that is exposed to the user. With this you have implicit objects that are created on behalf of the Service: the Route, the Configuration and the Revision are typically not created by the user or exposed to the user, although the Route, for example, could still be created on your own. Typically the only interaction with Knative is through the Service. The Revisions form a revision history, and the Configuration is really the head of that revision set: any time you change the configuration, either via the Service or directly, a new Revision gets created for you by the backend controller.

But one of my favorite things is that, in addition to all these benefits, it's really a simplification of the deployment model itself. Here you see a typical Kubernetes deployment for a very simple application, and you see it's a ton of YAML, quite huge. Besides the Deployment you also need a Kubernetes Service in front of it, and if you want to access it from the outside, you need an Ingress. Now compare this to the Knative version. Again, sorry, it's an older slide; the API version is of course v1 now, my bad. You have a Service here as well. Unfortunately it's also called Service, for historical reasons I think, but it's not a Kubernetes Service, it's a Knative Service: the kind is the same but the API version is different. And you can omit all the other things shown here in light gray, like the number of replicas, the selectors, and the name of the container. It really shrinks down considerably, and you do not have to specify all the other resources either.
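To make the comparison concrete, here is a minimal sketch of the whole application as a single Knative resource; the image reference is hypothetical:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: random
    spec:
      template:
        spec:
          containers:
            - image: quay.io/example/random:latest  # hypothetical image

    # No replica count, no Kubernetes Service, no Ingress:
    # Knative creates the Route, Configuration, Revision and the
    # underlying Deployment on your behalf.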
So I think it's now time for the demo.

Okay, wait, before we get into the demo: judging from some of the questions, we have a lot of people who are new to the whole concept of serverless. Does somebody want to give the two-sentence elevator pitch of what you would want serverless, like Knative, for? Just the 30 seconds, because it looks like some people are lacking that.

Okay. The thing is that it gives you a simplified deployment for your applications, you get flexible autoscaling based on traffic, and, as you will see with Eventing, you also get the full event flow, so it's a perfect platform for creating programs based on event-driven architectures. It's really about not having to worry about, for example, the number of replicas you want for your application; this gets scaled automatically, and we believe the Knative autoscaler is a more direct approach to autoscaling than the Horizontal Pod Autoscaler from Kubernetes.

Yeah, I just want to mention that the difference from the HPA in Kubernetes is basically that the HPA by default scales based on CPU and memory resources. So, for example, your application is consuming some amount of memory and some number of CPUs, but some deployments need to be scaled based on the incoming traffic. This is where the Knative autoscaler is different: it scales based on the number of concurrent requests, and that is beneficial for a lot of deployments, I would say.

Yeah, and there's the clue in the name there: the autoscaling. You don't need to worry about how you are going to scale your application up and down; it's all handled for you, which is pretty massive. This is great from a developer's perspective, and from an IT ops person's perspective there are really nice underlying infrastructure savings. If you've got all these stateless applications or serverless applications that are scaling back to zero, obviously they're not running concurrently all the time, so there are massive infrastructure savings, resource savings, to be had when you're scaling these apps back to zero and up as needed.

Okay, let's go ahead and go to the demo. Hopefully that'll also clarify things.

Yeah, I hope the demo makes this clear. It also shows that Knative comes with a quite decent developer experience; it's also made for developers. Knative is not only for operations so that you can save costs; one of the main focuses is really to make it easy to get your workloads, your applications, onto the cluster, and I hope I can make this clear with the demo now. What you see here: at the top of the screen there's a watch on the number of pods running on the cluster. I'm running against an OpenShift cluster in the cloud which has OpenShift Serverless installed, one of the productizations of Knative, but everything I'm showing here works perfectly fine with a pure Knative installation as well.

I'm leveraging a CLI which is called kn. This is the Knative CLI, but keep in mind: everything that I'm doing now in the demo you can also do with YAML files and kubectl, for example. kn just gives you a really nice user experience. What I'm creating now is a service, and the only things you need to provide as parameters are the name of the service and a reference to a container image. In this example it's a very simple REST service which just generates random numbers. When you curl it, as we will see in a second, it just returns you a random number.
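The create step and the smoke test from the demo look roughly like this; the image reference and the URL are hypothetical, since the URL depends on your cluster's domain:

    kn service create random --image quay.io/example/random:latest

    # kn waits until the service is ready and prints its URL
    curl -s http://random-default.apps.example.com | jq .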
The application itself is based on Quarkus, which is a Java runtime framework that gives you extremely fast startup times and extremely low memory overhead. This is also done by Red Hat, and you can use Quarkus; it's really super amazing, but there are probably tons of other talks about that, sorry. So let me create that. You see that with this CLI you wait synchronously until the service is up, and you see at the top that it gets started; this goes very quickly, and now you have it running on your cluster. You automatically get an ingress created for you, and you get back the URL. And of course you can just use curl: I'm using this URL and piping it into jq for formatting, and you get back the random number. So, a super sophisticated service here, okay.

So you have the service running, and if you wait long enough and I don't send any load, you will see that this pod scales down to zero again. But to make this a little bit quicker, I'm making an update. What I'm doing here now is updating the current service and setting some autoscaling parameters. There are a ton of autoscaling parameters you can use to influence the way the autoscaler acts. In this case I say that I want 10 concurrent requests at maximum, which means that after one pod serves 10 requests, another pod gets scaled up to serve the other requests. Roughly, and this is really only roughly, because there are some other parameters like burst capacity that I don't want to go into in detail; but roughly, if you have this concurrency limit of 10 and you have 100 requests coming in, then you get 10 pods which serve those requests. Then there is another parameter called the autoscaling window, which is just the period of time used for calculating metrics. In our context it means that if no requests come in within six seconds, then the pod will scale down. The default is 60 seconds, and you can already see at the top that the pod is going to be scaled down because I'm talking so much.

And then I'm adding an environment variable here, a delay of 1,000 milliseconds. This has a meaning for the application itself: it just means that there's a sleep of 1,000 milliseconds while serving the request. This is important to show you the upscaling behavior, because otherwise, if the request is too short, you never get to a concurrency of 10. Okay, let me do the update. And you see, for every update you get a new revision and a new deployment, so you can verify that the update really works.
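In YAML terms those knobs are annotations on the revision template. A hedged sketch of roughly what this update sets; the image and the DELAY variable name are made up for illustration:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: random
    spec:
      template:
        metadata:
          annotations:
            autoscaling.knative.dev/target: "10"  # about 10 in-flight requests per pod
            autoscaling.knative.dev/window: "6s"  # metrics window; idle pods go away after 6s
        spec:
          containers:
            - image: quay.io/example/random:latest  # hypothetical image
              env:
                - name: DELAY        # hypothetical name: per-request sleep in ms
                  value: "1000"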
You can look in all the revisions Then up there and here you see that the original one was revision one Then a new one is created, which is revision two and this revision gets all the traffic here The moment and if I'm I'm going to make a curl I get just a new version But actually I'm trying to to run now a mini low test on that So hey, so see like common which makes in that case 50 concurrent requests With that and let's see how what's happening now If I'm running that You'll see that immediately tons of containers are really scaled up And this is something also you have seen that it's not something that it's one after each other There has been multiple pots spanned up immediately together And this is because there's called very specific is a way how to detect a color panic mode Which means if there are the increase of requests are very quick Then you just create more pots in advance So in anticipation that there are more pots than only that you get more requests like that So yeah, but actually I think I already have shown you the the key thing So I have something more but actually I think we already I'm already a little bit Late here. So but you see also that now When this is done here, then all your pots going down after six seconds When no traffic comes in Okay, so this was the the surfing demo actually I could also of course you can also describe the service here directly you can Sorry service described And you get all all the information you get the URLs you get the the split so you can also split up the traffic among different versions So I could split between all the newer version there. So but actually Let's stop the demo here This is great Roland's your Christmas Christmas card shop is meeting demand. You're not losing any customers Yeah, you scaled out. So yeah, brilliant Yeah, that's of course one caveat because this course that it uses a latency so if you start from zero to one you always have to Um to wait under the content the first container starts up This is so called cold start latency, which is always kind of a problem We we try to minimize that so This course of latency has different components. So one is of course the application itself It needs to be start fast This is the reason why we use quarkus but because quarkus is super fast So if you look into the locks or if you measure it, it's really in below 10 milliseconds But then you have also other overhead like the cube let's start the container image Then you have the scheduling process and so on so forth. So it takes a little bit, but it's more in the second range Okay Sorry that um, so this was the demo certainly if uh are there Josh any questions around the demo so actually you're a mute, I think Josh, I can't hear you How did that happen? Okay, uh, we have quite a few questions, uh, queued up Um, and I don't think they're necessarily related to eventing. Um I but a bunch of them are related to scale up and scale down Um, uh, one one first thing is a sort of general conceptual Couple of general conceptual questions. Um, and I'm going to start with the sort of biggest one, which is Um, people are not understanding the principle of scaling up and scaling down a serverless thing That is when do you get additional instances and when do you not the kind of thinking about it in terms of hpa? 
Okay, sorry, that was the demo. Josh, are there any questions around the demo? Actually, you're on mute, I think. Josh, I can't hear you.

How did that happen? Okay, we have quite a few questions queued up, and I don't think they're necessarily related to Eventing, but a bunch of them are related to scale up and scale down. First, a couple of general conceptual questions, and I'm going to start with the biggest one: people are not understanding the principle of scaling a serverless thing up and down. That is, when do you get additional instances, and when do you not? They're thinking about it in terms of the HPA, and I know it works differently. So do you want to explain when you get an additional serverless container, say?

Yeah. The difference from the HPA is this: you can use the HPA with custom metrics as well, but by default you are using CPU and memory, and CPU consumption and memory consumption are more like indirect measures of your traffic. The assumption is that if traffic comes in, your application uses more CPU with 10 requests than with one request, and the same goes for memory. But the scaling trigger, the scaling event from the HPA, really goes on these indirect metrics, which gives you less deterministic behavior than scaling based on the actual number of requests. In our case it's really concurrency, meaning parallel requests in flight. So the Knative autoscaler, I think, maps more directly to the traffic and the load that you have. And if you think a little bit further, it also maps to the revenue you get, because the more users you have in parallel... but that is a little bit far-fetched, maybe.

Yeah. A follow-up on that from Slack: we've got somebody who works in the telco sector, and their concern about using serverless is the delay to start up a new container when you're autoscaling. Can you talk about that in terms of latency, for workloads that are latency sensitive, and how you'd handle that with serverless autoscaling?

Yeah, sure. Zbynek or Luke, jump in if you have ideas as well, but one thing you can always do is set the min-scale to one, which means you simply prevent scaling to zero. Of course you lose some of the savings benefit, but if you know that there will always be traffic, and it's super important to have very low latency for every request, then you set min-scale to one and you always have at least one pod running for the application. So you can avoid the cold start issue altogether with that.

But we also try to improve the cold start latency on several layers. We're looking into Kubernetes itself, for example. One very specific example: there are certain probes within the kubelet that poll every second, and of course, if you probe every second, you cannot get to sub-second behavior. This is one of the things we look into, and we also look into more things we can improve in Knative. But in the end we must be clear: it's a container-based system, so you will always have an overhead which is probably still higher than you would get with proprietary systems like Lambda, or other systems that do not use containers. On the other hand, you have the whole benefit of containers: you can run everything in a container with serverless on Knative, not only the specific workloads whose APIs suit Lambda. There, for example, only a handful of programming languages are supported, but with serverless containers you can run everything. So we are aware of that, and there are some answers for how this can be improved.
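The min-scale knob Roland describes is again a revision-template annotation; a minimal sketch:

    spec:
      template:
        metadata:
          annotations:
            autoscaling.knative.dev/minScale: "1"   # keep one pod warm, no cold starts
            autoscaling.knative.dev/maxScale: "20"  # optional upper bound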
The other thing to point out is that it really comes down to the underlying application and how it's been developed. An application that takes 10 seconds to start is always going to take 10 seconds to start. That's why you really need to look at the development of the application, and Roland touched on it before: Red Hat built Quarkus, the Java runtime framework, with exactly this in mind, a low memory footprint and very fast start times. So if you've got an older legacy-type application that takes maybe 10 or 20 seconds to start, even in a serverless environment it's still going to take 10 to 20 seconds to scale up and start. That's why you've got to start looking at the development of these applications with speed and performance in mind as well.

Sure. You'd definitely be a brave one to start WebSphere as a serverless application. And you can also be a little bit conservative in how you define the concurrency level, right? If you need there always to be headroom to answer requests quickly, you can set an aggressive concurrency target that starts spinning up those new service instances far before you hit the limit of what the currently running instances can handle.

Yeah. Here's actually a bigger concept question: we have somebody who's familiar with Amazon Elastic Beanstalk, and they want to know if somebody can compare work-dispatch and load systems like Beanstalk to Knative and Kubernetes serverless. Do these fulfill the same function? What's the difference between the two, aside from the fact that Beanstalk is VM-based, say, and Knative is going to be containers?

To be honest, I'm not really knowledgeable about Beanstalk, so I cannot give a qualified answer here. Sorry, I just don't want to tell you something wrong.

Oh, I get that. Yeah, okay.

Okay, so maybe we continue, but one question to Zbynek: I still have Eventing, and we only have so many minutes of time, so maybe we start with functions first and look at Eventing afterwards? Sure, yeah, because functions are super exciting, really super interesting, and a new thing. So I would hand over now to Zbynek.

Okay. Can you scroll those slides to the functions part? Yeah, let me scroll there. We can come back to the event sources and Eventing when we have time; I do want to talk about Eventing, because we also have quite a nice demo around that.

So, Roland described the two main parts of Knative, which are Serving and Eventing, and we are building on top of that, because we would like to provide an extended developer experience.
With plain Knative, you still need to create the container on your own; you still need to package it and do the deployment. With functions, we would like to get rid of all of this. We would like developers to just write the business logic and very simply deploy that function. The function gets deployed in the end as a Knative service, so it can benefit from everything a Knative service has: the autoscaling, the configuration, the Eventing part.

Our functions stack is based on CNCF Buildpacks. If you are not familiar with buildpacks, it's a cool technology, so I recommend you check it out. Very shortly: it allows you to package your application and create an OCI image from it. There are builders and stacks, which define the type of your application, and the buildpack then does the inspection of your code and, based on the actual application, based on the language, builds the application and produces the image. For our functions we currently support the Quarkus runtime, Node.js, Spring Boot, and Golang, and we are releasing Python support very soon.

The functions are opinionated. Basically, we provide you with a function template where you just need to implement your business logic, and then we call the buildpacks to package your application and deploy it as a Knative service. There is one more important aspect: each function can respond to HTTP, that is, plain HTTP requests, as we saw with the Serving demo, or it can respond to CloudEvents. I suppose Roland will be talking about CloudEvents later in the Eventing section, but basically CloudEvents is a specification for a wrapper around your arbitrary data, specified in a unified way. You have this wrapper, the CloudEvent, which is some metadata plus the data itself, and then you can send data through the system in a unified way. You have components that understand CloudEvents, and they don't really care what's inside the data. So you can, for example, have a Kafka connector which listens to your Kafka topic and converts the Kafka messages from the topic into CloudEvents, and then they can be sent through the system in a unified way to other services, or whatever else comes to your mind. It's a very simple specification, and our functions can respond to these CloudEvents as well. This way we can enable a real event-driven architecture and development.
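For the curious, a CloudEvent in its JSON encoding is just a small envelope around the payload; the type, source and data values in this sketch are illustrative:

    {
      "specversion": "1.0",
      "type": "com.example.kafka.message",
      "source": "/my-kafka-topic",
      "id": "a5c2a1e0-0001",
      "datacontenttype": "application/json",
      "data": { "text": "hello" }
    }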
Roland, can you go to the next slide? Yeah.

I should interrupt you with a really quick question related to this. Somebody wants to know whether this supports Kafka versions 2.5 and higher.

I'm not sure off the top of my head what the currently supported version of Kafka in OpenShift is, but basically, OpenShift Serverless you can install through the OperatorHub in your OpenShift instance, so whatever Kafka version your OpenShift supports should be supported, if I'm not mistaken.

So, to really enable the developer experience, we have created a plugin for the kn CLI, which Roland showed before. With this kn CLI, with this func plugin, you can manage your functions: you can build them, deploy them, et cetera. So maybe I'll show the demo now. Okay, let me share my desktop.

While you're doing that, Zbynek, just to mention to the folks out there: this is currently in Dev Preview of OpenShift Serverless, soon to go into Tech Preview. Yeah. And this project is not part of Knative, but we are building on top of Knative, and it is of course an open source project as well.

So, as I mentioned, we developed a func plugin for the kn CLI, and as you can see, we can do several things with it. Let me quickly create the same application that Roland did before, to print random numbers. All I need to do is run kn func create, and I need to name my function, so I'll name it random. Okay, let me first show you the help for the create command. As I was saying before, each function can be triggered either by HTTP or by events, so we can specify which kind of function we are creating; the default is HTTP. Then we can specify the runtime. The Node runtime is the default one, but if I want to develop the application in a different runtime, I need to specify it on the command line. So: kn func create random. It will be created with the Node runtime and it will respond to HTTP, so I don't have to specify anything else.

As you can see over here, my project was generated. It's a template, a pretty standard Node project: I have a package.json with dependencies, and then I have the index.js file. This is the important part; this is the place where you put your business logic. The only thing the function needs to do is export the actual function that is to be invoked. In this case the invoke function gets called, and it handles POST requests and GET requests differently. Let me just delete this one, and I will quickly paste the function implementation to save time. As you can see, all I need to do is return some random number, and export this function.
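In the spirit of what Zbynek pastes here, a Node handler can be as small as the following; treat it as a sketch only, since the exact template shape and context API vary between func releases:

    // index.js: sketch of a minimal func-style Node handler
    function invoke(context) {
      // context wraps the incoming HTTP request (or CloudEvent);
      // we ignore it and just return a random number as JSON
      return { random: Math.floor(Math.random() * 100) };
    }

    module.exports = invoke;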
To deploy this function, all I need to do is run kn func deploy, and specify that I want to deploy my random function. What's happening right now under the hood? We are calling the buildpacks to analyze this application. It finds out that it is a Node application, so the Node builder is running, and it will produce a container with the Node application, and that will get deployed on OpenShift as a Knative service. As you can see, the image has been built, it has been pushed to the registry, and now it has been deployed on OpenShift.

Before we look at the application, there is one important aspect, which is this func.yaml file, which holds the configuration of our function. Over here we can see the image that was produced for this function, the image digest, the trigger type, and the builder; the Node builder was used for the buildpacks build. And over here we can specify environment variables, or later secrets, et cetera, that should be used by the service.
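A hedged sketch of what such a func.yaml can look like; field names have shifted between releases of the project, so take the exact spelling with a grain of salt, and the registry path is hypothetical:

    name: random
    runtime: node
    trigger: http
    image: image-registry.example/demo/random:latest
    imageDigest: sha256:…
    builder: default
    envVars: {}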
As we can see, the function was deployed as a Knative service, so I can curl it, and we can see that it returns random numbers. So this was a very simple function, triggered by HTTP. I'm not sure if you want to continue; if we don't have time for the Eventing part, we can show the rest later.

Yeah, sure, let me jump back. Thanks a lot for the functions part; this is really the new thing. It's worth mentioning that this is not Knative, this is on top of Knative; it's done as part of OpenShift Serverless, and we are working together with the upstream community to find a solution for this functions part, which also covers building. I think that's worth keeping in mind. And if you go to the upstream project, Boson, then you can still use it with plain Knative, of course. Yes, absolutely.

So let me share my screen again. Here we go. Okay. We jumped over the Eventing part because I think it was important to see this functions thing in action, because it's really amazing to see how you can build things, and not only deploy but also run and create things. But let's talk about Eventing. What is Knative Eventing actually? It's really about a universal subscription and delivery mechanism for events, and as Zbynek mentioned, it's based on CloudEvents. This is a data standard for events, and CloudEvents itself is a CNCF standard, which is important. It describes more or less the format of events, the envelope of these events and how they are transported, but the payload itself can be arbitrary data. You get a common set of headers and different transport protocols that you can use for CloudEvents, so it gives you an open standard with which you can connect to your eventing infrastructure.

This eventing infrastructure in Knative is centered around the concept of channels, with different backends for these channels. In the simplest case there is an in-memory channel which transports your events through memory, but of course it does not give any guarantees for your event delivery. You can also use Apache Kafka for the backend, and other systems too.

The channel is one of the core concepts, and the other two concepts are the source and the sink. The source is where the event comes from; this is how you integrate your event sources into the system. For Knative, a source is more like an adapter: it's not really the origin of the event, but rather the part that translates a custom event format into the CloudEvent format. This is how Knative event sources typically work. The events that come out of the source are then routed to a sink in some way.

Before we talk about the routing, let's briefly talk about the sources. As I mentioned, they are kind of adapters, but of course they can also create events on their own. They are typically declared by a custom resource, so every source has its own custom resource definition, which is evaluated by a source-specific operator in the backend. Then you connect this event source to a sink, and every time an event occurs for this source, the event is put into the sink.

What sources are available? There are four sources that come out of the box with Knative; every time you install Knative you have these four available. There's a very simple source, the ping source; you can guess what it does: it just emits CloudEvents periodically, on a schedule, just a cron expression. The API server source converts events that come from the Kubernetes API server into CloudEvents and moves them on to the sink. And there are two general-purpose kinds of sources: the sink binding, which you can view as a source since it connects an arbitrary pod to a sink, and the same goes for the container source.
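The ping source, for example, is only a few lines of YAML; a sketch, noting that the apiVersion of sources has moved between alpha, beta and v1 across Knative releases, and the payload here is illustrative:

    apiVersion: sources.knative.dev/v1  # varies by Knative release
    kind: PingSource
    metadata:
      name: heartbeat
    spec:
      schedule: "*/1 * * * *"           # cron expression: every minute
      data: '{"message": "hello"}'
      sink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: random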
So actually the camel is a An enterprise application integration framework that allows you to connect to service to different external systems There are around 300 these kind of components or connectors if you like to to call them Like you can connect to f3 aws You can connect to ftp to all of these this kind of External systems and the camelet binding allows you to transform these this camel component generated events into cloud events So, um, yeah, let's go quickly The way how you can connect such a source to the service the service you already know from kathative serving But actually there are three main ways how you can connect a source to a service The first one is a direct way. So you connect directly the sink Which could be a connective service to the source so that every event that comes from the source goes to the service or to the sink Of course, this has some drawbacks because you don't get any advantages from a messaging system. You just If you didn't so there's no queuing support. You don't have support for back pressure. There's no And when the source goes away, you might miss service When the sink goes away, you might miss events and so on before so this is really only just for testing, I would say A more advanced way to do these things is to leverage a channel So you can have a source so a source can Push events into a channel and then your sink connects to the channel We are a subscription, which is also a city. So everything what you see here on the diagram are crd. So custom resources And so you can build up arbitrary complex event topologies with that Of course, this kind Can get confusing. So there's a third way how you can do that, which is leveraging a broker A broker is I think the most advanced concept how you can connect stuff together Again, you have a source which feeds into a broker and then the sinks also Subscribes but not the subscription but with a trigger to the broker And the benefit of a trigger is that it also allows you to provide a filter so that The sink only gets the events of a certain Type that the filter filters on so in this picture here The orange events are going to the upper sink And on the on the lower hand side, they are the The green one Filled it here. So this is a also easy way how you can Create an eventing topology And of course a sink can also return a cloud event in this case And if this is the case if the return value of the sink, which is typically also used via hgp It's a cloud event then it's feedback into the broker and gets rerouted again So in that way you can really make a very complex topologies like that Roland just a quick question around that. What's Could yourself or the Local spinning there give an example of where where would you use eventing in the real world? What would what would a real world scenario look like? What sort of events would would trigger this for example? This triggers for example is we hadn't got a demo for that actually unfortunately not enough time Example would be let's assume you have an api server source Which emits all the events from api from the api server of Kubernetes But then you have a sink that only wants to react on Events that are about deleting something or adding something or modifying something This can be done by specifying a trigger a filter expression to say okay. 
I'm interested only into this delete events And then it also only get these delete events and can do for example some cleanup stuff So you could imagine a sink that is there sitting there looking for Delete events and then because you need to make some custom cleanups For that of course the other extension mechanism for Kubernetes that can do that, but but yeah, and this example this could be happy Maybe you want Maybe I can I can show the telegram demo, which is like Very quickly showed all this all these benefits. It could be like in a couple of seconds. So if you let me share my screen I can yeah, you can you can do but I think we'll be nearly out of time. I'm not sure how much we I think I think we'll put a little bit extra. Yeah, go go ahead and share it and in the meantime I'm going to give you guys some questions Okay So go ahead Okay, so Let's say let's say You are asking about some real case scenario. This is not a Such a real case, but basically imagine that I am in a telegram conversation with some telegram boards And I'm chatting with this board and once I'm Every time I set a message to this board the message get analyzed And if there is a photo in this message Uh, the there is done some analysis on this photo and if there are some Faces on the photo, uh, I will get I will get number of the faces and the people emotions And they'll try to make a guess on the age of the persons So we can we can easily easily implement this with k-native service and the functions So if we look at this at this developer console open to the console, we can see that on the left side We have telegram source. This is the cable k telegram source It basically listens to the telegram conversation And it is sending every message to the to the k-native eventing broker, which is the component in the center And then we have three functions which are k-native services connecting to this broker The first function receives receives the the telegram message And if there is a photo in the message it will respond with another cloud event back to the broker Second second function will take this cloud event with the with the photo It will do the analysis and reply back to the broker And then finally a responder function will take this this final result from the analysis and it will respond back Back to the telegram telegram chat. So let me just quickly show it to so as you can see right now The functions are scaled to zero. So I will just type hello to the telegram world And we can see that the telegram source sent Send the cloud event to the broker and the receiver function receives receives the message But it doesn't find any any photo in the in the message So it responded that there is no photo and our responder function Responded back back to the telegram chat with this with this nice message. So actually, let me send some Some photo as you can see I have some prepared photos in the here. So I will send it to the teleconversation And this time you can see that the photo was analyzed by the first function So the second function the processor function Basically does the analysis in this case it is calling Microsoft face API in the cloud So and responding back back to the broker with the cloud event that is holding this this data and The last function basically take to the cloud event and responding back to the chat. 
So this way you can see the event flow through the system; it is all happening through the broker. And the last aspect: these lines are the connections to the broker, and on each trigger you can specify a filter. As you can see here, we are filtering on the type of the CloudEvent. Each CloudEvent has a specific type, and this way we ensure that each function gets only the CloudEvents with its specific type. So I hope that gave you guys a picture.

The benefit you're really seeing is that only those components are running that are really needed. I think this is one of the big things: if you, for example, have a million text messages without any image, the image-processing function would never spin up, because it never has to process an image. I think this is one of the benefits of this eventing.

Are you all ready for more questions? Yeah, sure. Cool. So, another question, which we actually got from two different viewers, is logs. If you need to examine logs from, say, triggered functions, where do those logs go?

Yeah, these logs are just logs coming from a pod, so they end up in the pod logs, and of course you have techniques for collecting all these pod logs into a central system where you can examine them. It's the same question as how you collect logs from pods of regular applications: there's no difference between serverless logs and regular logs. You can of course also look directly into the logs, but as you have seen, things scale up and down in a very dynamic way, so I really recommend that you collect these logs in a centralized logging subsystem, like Logstash or something.
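For ad-hoc inspection, Knative labels the pods it creates, so you can select a service's logs without knowing pod names; a sketch using oc (kubectl works the same way), with the service name taken from the earlier demo and user-container being Knative's default container name:

    oc logs -l serving.knative.dev/service=random -c user-container --tail=100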
Okay, we actually got another one. One of our viewers, while you've been doing this presentation, has an OpenShift cluster, and they have tried to install serverless on it. I think it's actually OKD, because they said they tried to install Knative and it's not showing as ready. Do they need to do something else?

No, then that's a bug, I would say, because this is really super easy to install; we could have shown it in a demo within two minutes. You go to the Operator Lifecycle Manager catalog, you select OpenShift Serverless, and then you have to follow the instructions: you need to create a KnativeServing resource, I think. It's all very well documented, and I don't think there is any extra magic hidden step behind it.

One of our community members wanted to know when the Kafka broker is coming to OpenShift Serverless. I thought it was already there?

Yeah, so there are two ways. What we already have is a broker that uses Kafka for the backend, based on a Kafka channel. So with the broker itself we already get the benefits of Kafka there. But there are not only plans, there is active work ongoing, to have a dedicated Kafka broker that really does not need a channel in between, so you save one translation step, so to say. I cannot give any ETA for that, except that it's really on the way.

Yeah, and two weeks ago, a week and a half ago, we released Serverless 1.14, which now has the Kafka plugin included as well.

So yeah, we have Kafka GA in OpenShift Serverless for the channel, but there is still work going on. We can already provide Kafka as a backend system for channels and also as a source: you can use a Kafka cluster as a source of events, so it just picks up messages from topics, converts them to CloudEvents, and passes them on.
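For reference, the "create one KnativeServing resource" step from the install answer looks roughly like this. This is a minimal sketch; the API version shown matches operator releases from around this time and may differ in yours.

```yaml
# Created after installing the OpenShift Serverless operator from the OLM
# catalog; this resource is what actually turns on Knative Serving.
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
```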
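And for the Kafka-as-a-source part, a minimal KafkaSource sketch: it reads records from a topic and delivers them as CloudEvents to a sink, here a broker. The bootstrap server address, topic, and broker name are placeholders.

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source-example
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka.svc:9092   # placeholder address
  topics:
    - my-topic                                    # placeholder topic
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```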
And one last question, and this is the hard one: the Knative roadmap. Like, when are we getting to 1.0?

Yeah, that's a hard one, really. There's no real ETA for that, but I can quickly describe the process of how we're trying to get to 1.0. Yesterday, I think, there was a TOC meeting, and we agreed on a set of criteria that needs to be fulfilled to get to GA. So we set a GA bar, for example having 80% code coverage on every component; that's one of them, among others. And we want to release Eventing, Serving, the client, everything from Knative at the same time, so that we have one coherent Knative release for 1.0. We will go through that checklist and check everything. I think we're confident that many of the requirements are already met, but we really have to go through them, and if that's done, then 1.0 is not so far away. I would say at least within this year for sure. My gut feeling says it could even be summer, but don't take me at my word; that's just my feeling. The process really is that we're now going through the criteria, because we want a super solid 1.0 release that everybody can rely on.

But you can see that Knative itself is already super stable, because it's the foundation of many products that are already out there, like OpenShift Serverless. So you get all the GA benefits already if you decide on OpenShift Serverless, even if it's based on a non-1.0 version; the API version is already v1. So for us it's really just about jumping over this last hurdle.

Okay. So thank you, everybody. Remember we are in the CNCF Slack, in the channel #6-kubecon-red-hat. Feel free to go in there and ask your questions about Knative; I'm sure someone in there can help get you to the right answers. Or in the Knative Slack. So thank you very much for all your questions, and we will see you at the next office hours on CNCF Slack. Thanks so much.

Yes, thank you very much, wonderful demo, and thank you, audience, for participating. We really appreciate it. Thank you. Thanks. Cheers.