So, hello everybody. Thank you for joining me. My name is Giuseppe Bolognore. This session was supposed to be given together with Giuseppe Brignese, but he got sick, and that is why, I'm afraid, you will not get a presentation with two Giuseppes.

First of all, this is a session about Knative. I will talk about what I've learned about the community, where the community is going, what the related projects are, and, of course, what Knative is. This is something for starters, for people who have never heard about Knative, so I will try to keep it as basic as possible; feel free to stop me with questions.

So, first slide: what is this? Many of us have already seen this picture. This is a "serverless" data center. Of course, anybody who hears about Knative thinks about serverless, which is good, but we will see that there is more to it than that.

This is the definition from Wikipedia of what serverless is. Basically, it points out two things, which I've tried to highlight in bold. It talks about a cloud provider, so it assumes that a cloud provider is needed for anything serverless. I think we can think more broadly about automation: not just a cloud provider in the sense of a public cloud, but also something on premise, like Kubernetes, or something heavily automated in your own data center. So in reality the picture is a fake: it's not without servers, it's with heavily automated servers. You need a lot of servers for serverless.

And there is a point I like about pricing, because one of the reasons for all this buzz around serverless is a different pricing model, which in the case of a public cloud provider can be consumption-based. We will see that Lambda is, of course, one of the most mature offerings in the function-as-a-service and serverless market, but it's not the only way.
It's not only about pricing in the sense of a public cloud, but also pricing in the sense of consumption of your own resources. Another very important thing is that it simplifies the way of deploying your code. The code is central to serverless: the whole idea is to automate everything in such a way that you can really focus just on the code and not on the infrastructure. Of course, it's a promise, and it's not so easy to keep, but this is the main goal of the so-called serverless movement.

How does it work? It's very simple, of course. You have an event, you have a function, and you have a result, so input and output. The idea is that the input comes from an event; in the simplest case it may be a message arriving on a queue, and we will see the eventing architecture later. Then you have very simple code. This is another key point: the function is usually very simple code, in the sense that you may not have the full libraries of your language available, but something very small that addresses very specific use cases. And you have a result; of course, the result is necessary to understand what the computation produced. This is the main point of serverless.

If we look at the roadmap, I would say the history, of serverless, we see that the first attempt at serverless is recognized as AWS Lambda. And indeed, if you talk to people on the market, to customers, about serverless, most of them will say serverless equals AWS Lambda. This was actually the first experiment with this kind of computational model, in which you define a function and somebody else provides all the infrastructure needed for your function to run. Then it started to gain a community.
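To make the event-in, result-out model concrete, here is a minimal sketch in Python. This is my own illustration, not anything prescribed by Lambda or Knative; the payload shape and field names are made up.

```python
def handler(event: dict) -> dict:
    """A 'function' in the serverless sense: very small, very focused code.

    The event is the input (for example, a message taken from a queue);
    the return value is the result the platform can route onward.
    """
    amounts = event.get("amounts", [])  # illustrative payload shape
    return {"count": len(amounts), "total": sum(amounts)}


# Example event, as it might arrive from a queue:
print(handler({"amounts": [10, 20, 12]}))  # → {'count': 3, 'total': 42}
```

The point of the model is that everything around this function, triggering it, scaling it, wiring the queue to it, is the platform's job, not yours.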
You can see IBM OpenWhisk, a project that Red Hat collaborates on, which is the attempt to bring this AWS Lambda style of doing things to a private cloud or to your own data center; it's something that you can run on your own infrastructure. Then we have other players jumping on the ship: we see Azure Functions, we see Google Cloud Functions, and we see the community shifting towards Kubernetes. So we see a lot of frameworks that are native to Kubernetes for doing serverless, for doing function as a service. And of course Knative, which is really new, I think less than one year old: it is heavily based on Kubernetes, and it is an effort on the road towards standardizing the way serverless and FaaS, function as a service, are done.

The main question for us at Red Hat, as an open-source company, is: is serverless open source? Is there a possibility to take this kind of thing, which is heavily tied to how the infrastructure is implemented (just think about AWS Lambda), and make it open source, make it a standard, so that you can take your code and run it on different function-as-a-service providers? The answer, of course, as with containers and the other technologies, is standards. So we are in touch with the Cloud Native Computing Foundation, and the main related project is CloudEvents. The idea is to create standards that specify how events are described and delivered to functions, so that the same functions can be executed on different function-as-a-service providers. So the answer is standards. And as many of you already know, the Cloud Native Computing Foundation is part of the Linux Foundation, so it's something really serious about open source.
This is the serverless landscape; you can see an up-to-date version of this map at that URL, s.cncf.io. It is part of the bigger Cloud Native landscape that many of you probably already know, and it lists all the available platforms, frameworks, and tools for implementing serverless and FaaS. There is a project from AWS, there should be OpenWhisk somewhere, and there are a lot of vendors and projects participating in this Cloud Native Computing Foundation landscape.

So, now to Knative. If you go to the Knative GitHub, this is how the project presents itself: Knative is a way of extending Kubernetes, providing a set of middleware components. This is very important, because we are talking about something that sits on top of the platform, in the same way middleware sits on top of a traditional operating system. The idea is that these components are essential for building modern applications, in the sense of the function-as-a-service paradigm. Source-centric: as we said about source code, the idea that you focus just on developing your business logic is crucial. And container-based: this is also very important. It's something you can almost take for granted, because it's on top of Kubernetes, and so far Kubernetes is the best way to host your containers, but it's worth stressing that all this stuff is container-based. And the idea is that you can run it anywhere, of course, because of containers and because of Kubernetes: in the cloud, in your data center, on premise, wherever you want.

This is the community behind the Knative project. You can see that IBM jumped on the train, and I think they are making their own evaluation of how to integrate Knative with OpenWhisk, but this is my personal opinion. We have Red Hat, and as I said we made some investments in OpenWhisk, so the idea is that at some point in time there will be some convergence with Knative.
We have Google, which is probably the founder, or one of the first founders, and we have Pivotal.

So what's Knative about? We have three main pillars. The first one is build, and we got this amazing IKEA-style sheet on how to build stuff. The idea is to have a pluggable model for building your own artifacts, which may be code artifacts, like JAR files, or, I don't know, NPM packages, or stuff like that, and also containers, starting from source code. We stressed a lot that function as a service and serverless put the main focus on source code.

Then we have serving, which is probably the most important thing in the whole architecture. The idea is an architecture that can serve events to your source code and that has the ability to scale to zero. This is very important, because the whole point of serverless is elasticity: having zero instances of your code when it's not needed, and then, when an event comes, scaling up to the number of copies of your code, of your container, that you need. Probably the most important feature of serverless is this ability to scale to zero and then scale up to the resources needed to complete your task. So serving is a very important pillar in the Knative architecture.

And last but not least is the eventing infrastructure: everything about the way events, in terms of messages and calls and data, are delivered to your functions. By the way, how many of you recognize what this event is? No, not Metallica. It's an Iron Maiden concert.

Anyway, a bit more of a deep dive. What's a build? All these concepts in Knative are implemented as CRDs, Kubernetes Custom Resource Definitions. A build is essentially a list of containers that run in order, one after the other, implementing your build steps. So it's something very similar to what you can do with a pipeline in Jenkins.
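As a rough sketch of what such a Build resource can look like, take this manifest. This is my own illustration: the repository URL, image names, and arguments are all made up, and the exact fields depend on the Knative release you are running.

```yaml
# Illustrative Knative Build: an ordered list of container steps that share
# a workspace seeded from a Git source. All names here are examples.
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: app-build
spec:
  source:
    git:
      url: https://github.com/example/app.git
      revision: master
  steps:
    - name: compile
      image: maven:3-jdk-8                    # step 1: compile and package the JAR
      args: ["mvn", "package"]
    - name: containerize
      image: gcr.io/kaniko-project/executor   # step 2: build and push the image
      args: ["--destination=registry.example.com/app:latest"]
```

Each step is just a container, and the steps run strictly in order, which is what makes the Jenkins-pipeline comparison fit.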
And indeed, the community is now wondering whether there is a way to evolve this build concept into a pipeline concept, because they are very similar. You can also have build templates, the same thing as a build but as a template, so something you can reuse in a better way. The whole idea is that you run all these steps, all these containers, one after the other, and there is some kind of shared file system between the steps. That could be a Git checkout, that could be a plain file system, used to work on partial artifacts. So, I don't know, if it's a Java build, you may first fetch the dependencies, then produce a JAR, and then that JAR is containerized, packaged into a container: one step after the other, each step sharing the output of the previous one, if there is one. For those of you who are familiar with OpenShift, this is somewhat similar to source-to-image; the whole idea is to give you a standard way to implement source-to-image natively on Kubernetes concepts.

Serving is related to routing traffic, so, as many of you can imagine, it's heavily based on Istio. There are some basic concepts. The most important one is the service, which is not the Kubernetes Service, and that creates a lot of confusion; basically it's a high-level concept that groups the other pieces of the serving architecture. These are the route, which is an ingress into your code, the way to route live traffic, live messages, into your code; the configuration, which is basically the state of your deployment, so your application code plus your configuration files; and the revisions, which are snapshots of the configuration. Every time you change your configuration, you get a new revision.
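As a hedged sketch of how these serving pieces relate (field names have varied across Knative releases, and every name below is illustrative), a Knative Service wrapping a configuration, its revisions, and a route could look roughly like this:

```yaml
# Illustrative Knative Service: the template is the configuration; every edit
# to it stamps out a new immutable revision; the traffic block is the route.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/greeter:latest
          env:
            - name: GREETING        # configuration kept separate from the code image
              value: "Hello"
  traffic:
    - latestRevision: true          # send most live traffic to the newest revision
      percent: 90
    - revisionName: greeter-00001   # keep some traffic on an older snapshot
      percent: 10
```

The one resource gives you the whole grouping the slide describes: edit the template and a new revision appears; adjust the traffic block and the route shifts messages between revisions.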
And this is somewhat similar to what you can do with Istio when you have A/B releases or canary deployments or things like that: you have many different revisions, and via the route you can send the traffic, the messages, to different revisions following some kind of configuration or rules, which are basically Istio rules. The service is just a logical grouping of all these concepts together.

The last pillar, which in my personal opinion is the least mature part of the product, but is already out and you can have a look at it, is the eventing pillar of Knative, which is designed to be consistent with the CloudEvents specification. Basically, it is a way of abstracting away from a specific queue provider, from a specific broker. You may have many different actors. You have the sources, which are the event producers. You have the event consumers, which come in two families: the first family is the addressables, which basically implement the ability to receive a message, and the Kubernetes Services are a particular implementation of addressable, so they are consumers; and you also have the callables, which are a bit more complex, because they can receive events and transform their content. Then you have the channel, which is probably the most intuitive concept in this architecture: basically a named endpoint which implements the broker functionality. Indeed, it's often implemented with Kafka or AMQP; it's basically the provider of message persistence and delivery guarantees and all that kind of stuff. The last concept is the subscription, which is the link between a channel and a service, a consumer. Anyway, it's the way to link the consumer to the broker, or the producer to the broker: exactly a subscription, as the name says.
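As a sketch of how these eventing pieces wire together (again purely illustrative; the eventing API has changed a lot between Knative releases, and all names below are made up), a channel plus a subscription could look like:

```yaml
# Illustrative Knative eventing wiring: a Channel provides the broker-like
# named endpoint; a Subscription links it to a consumer service.
apiVersion: messaging.knative.dev/v1
kind: Channel
metadata:
  name: orders            # the named endpoint; often backed by Kafka or similar
---
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: orders-to-processor
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: Channel
    name: orders
  subscriber:             # the consumer: an addressable that receives the events
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor
```

The source would publish into the channel, and the subscription is what delivers those events onward to the consumer, exactly the producer-channel-consumer chain described above.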
So, what's interesting about all these Knative concepts is that they map one-to-one to some of the twelve-factor principles. As many of you know, the twelve-factor principles, which are on the website 12factor.net, are a way to develop cloud-native applications, and you can see that there is a one-to-one mapping. I'm not going through all of the points, but as an example, the idea of having the configuration strictly separated from the code is something that is very well implemented in Knative.

I don't know how many of you know this graph. Basically, this is "pizza as a service". It has had many, many different versions; the author is credited there, and you can even look at their blog. Basically, it's a way to explain the concept of "something as a service" with pizza. You go from the home-made pizza, in which you manage basically everything, from the oven to the conversation with your friends, up to the party, in which the whole infrastructure is managed by the party host and you just bring the conversation. It's a metaphor for the different kinds of "something as a service": you start from traditional on-premise, which is like making your own pizza at home and managing everything, through to the party, which is software as a service, where you just bring your data and everything else is provided.

As you can see, there is also a level just before software as a service, which is function as a service. What's the idea? In the metaphor, you have to bring friends; in real life, you have to bring just your functions.

What is this about? The point is that Knative does not provide a runtime for functions; it just provides you the infrastructure. So there is the build, and you can build the container; there is a way to deliver your messages and your events to the container; but there is no specific runtime. What's inside the container? It's almost up to you. It's not entirely true, but I would say it's up to you: you can put anything in the container.
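To show what "anything in the container" can mean in practice, here is a minimal sketch of a workload you could put inside: a plain HTTP server wrapping a function. This is entirely my own illustration, not an official Knative runtime; the function, names, and port are assumptions.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def handle_event(event: dict) -> dict:
    # Pure business logic: no infrastructure concerns at all.
    return {"greeting": f"Hello, {event.get('name', 'world')}!"}


class FunctionContainer(BaseHTTPRequestHandler):
    # The platform routes live HTTP traffic to the container; this handler
    # decodes the request body, calls the function, and writes the result back.
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle_event(event)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


def serve(port: int = 8080) -> None:
    # In a real container image, serve() would be the entrypoint; 8080 is
    # just a common default, the platform can inject a different port.
    HTTPServer(("", port), FunctionContainer).serve_forever()
```

Everything above the `serve` function is the part that is "up to you"; Knative only cares that something in the container answers the traffic it routes in.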
You can have a facility that spins up the container and delivers the message, but there is no concept of runtimes, of actual language implementations. So the idea is that with Knative alone you don't have a full FaaS implementation: Knative manages the serverless infrastructure, but it does not do the last step, which is the function runtime. And indeed, if you have a look at the CNCF serverless whitepaper, it specifically stresses that a bundle of one or more functions is what has to be there to be function-as-a-service compliant, if we can talk about compliance. So the idea is that serverless is the whole set of infrastructure, the storage, the messaging, the events, while function as a service adds the implementation of the actual language runtime. With Knative you have all the infrastructure; you can do FaaS, but you have to do the last step on your own.

So what is the community looking into? Some kind of architecture in which you have, of course, CoreOS as the foundation; two branches of container orchestration; many different components provided by OpenShift, and Knative will probably be one of them, or anyway will be in close relation with OpenShift; and then a very thin layer of function-as-a-service providers, which may be OpenWhisk, may be OpenFaaS; many people are looking into Kubeless. So a very thin runtime, providing just the last mile of language runtime on top of your serverless architecture, your function-as-a-service architecture.

What are the common use cases of all these serverless, function-as-a-service implementations? Well, most of them came to life with cloud providers, and so they are related to tasks that are burstable. Things like any scheduled task: you have to do some computation at midnight, at one o'clock it ends, and you want to shut everything down.
You may think about image manipulation: think about an online newspaper, or a website, or something that has a bunch of photographs and needs to convert them for many different devices; they can do it in a burstable way and just use, and pay for, the computational resources they need in that specific moment. You can think about other encoding jobs, like video encoding or audio encoding, that depend heavily on a peak of traffic and can then just be shut down. And many use cases come from things like IoT, from processing voice for recognition devices like Alexa or Cortana, and of course things like chatbots, where you have interaction only when somebody messages you and you have to process the message.

When not to use serverless? Basically, when you have a strict dependency on the latency of the application, because the spin-up of the first container may be computationally heavy; of course, people are working on avoiding this penalty, but there is a penalty. Long-running tasks that can't be split into smaller steps, so that they can't take advantage of parallelization. Tasks with very large memory and CPU requirements: the idea is to have small steps with low requirements, not one huge step with heavy requirements. And especially when you can't deal with a cold start, so when you need something with a warm-up, or caches, or things like that: this is not a good fit, because, as we have seen, in serverless you start from scratch every time.

So, I think we are okay, we are on time, so we have a couple of minutes for questions. Please go ahead. What's a long-running task? Five minutes, seconds, an hour? That's a good question. I repeat the question for the recording: the question is, what is a long-running task?
I don't think there is a specific definition. I would stress parallelization more: if your task is heavily parallelizable, it's probably a good fit for this kind of runtime. I would not say that if it's a matter of minutes it's a long task; probably if it's a matter of hours it's not a good fit, but there is no specific rule on this.

What are the channels for passing the events that are supported out of the box? Whether there are channels supported out of the box in Knative: I would say that Kafka is probably the most used implementation so far. I don't think there is any official support yet, because the project is incubating, but I would expect at least Kafka and probably some mainstream queue provider, ActiveMQ and things like that.

So the question is, and correct me if I'm wrong: if I already have an infrastructure on premise, what is the advantage of running workloads as serverless, given that I'm still paying for the whole infrastructure? The answer is that with serverless you can mix workloads in a better way. If you are, I don't know, a bank, and you have batch workloads that run at night, you may want more computational power during the day, because you have your people coming into the branch offices, and you can mix those workloads in a better way: during the day you reuse the computational power that your batch jobs used at night. Of course, this is something you can also do with containers, but hopefully with function as a service it will be more granular. Personally, I don't think we will migrate everything to function as a service, but it can help in a heavily microservice-oriented architecture, because it can provide some glue here and there.

Do we have time for other questions? Thank you very much.