Hello, everybody. Oh, more people than I expected. So what I'm going to talk about today is Knative. Quick question first, oops, sorry about that. How many of you have some experience with serverless computing? Nobody? Oh, just a few people, all right. And how many of you have heard about the Knative project? All right, cool. So today I'm going to be talking about how Knative makes developers awesome on the serverless journey. Let me talk about myself first, briefly. My name is Daniel. I'm a DevOps evangelist at Red Hat, and I also work as a solution architect. I specialize in cloud-native application development and DevOps practices, rather than just a single Red Hat product or one methodology. So I've had many chances to talk about these topics, cloud native, microservices, DevOps, DevSecOps, and even serverless, with enterprise developers. Quick question one more time: how many of you are enterprise developers, or just developers? Oh, yeah, perfect. Outside of Red Hat, I'm a CNCF ambassador, so I try to spend a lot of time inspiring people who are looking for a better way to build cloud-native applications based on CNCF projects, of course Kubernetes, Prometheus, and Envoy proxy, and recently we have another good graduated project, CoreDNS, as well. And actually, I've been a Java developer for more than 16 years; I love Java technology. I also really love writing technical and non-technical articles on opensource.com and on my private blog. So if you have any questions about me or my specialty after this session, you can follow my Twitter, you can go through my GitHub repos, and you can also send me an email directly. So this is the bottom line of today's session. Maybe you're looking for the fundamental stuff, like what serverless computing, a serverless platform, and Function as a Service (FaaS) actually mean.
I don't have enough time today to go through every single detail, but luckily I wrote about this fundamental stuff on opensource.com, so you need just about 10 minutes to go through the article: "7 open source platforms to get started with serverless computing." So rather than talking about those fundamentals, I'm going to talk a bit more about the developer experience. In the meantime, I've asked enterprise developers many times: how do enterprise developers imagine serverless computing? And the answer, more than 80 or 90 percent of the time, was AWS Lambda. AWS Lambda is really cool, but there are some big challenges if you run your serverless applications, container applications, or microservices applications on top of it; I'll come back to that a little later. And this is how developers think about what serverless is; these are the principles from a developer's point of view. The developer doesn't want to worry about the infrastructure that runs the serverless application. For example, I don't want to care about how many virtual machines should be provisioned before I deploy my serverless application. Developers are also looking for autoscaling capability to follow rapid changes in their workloads. And the last thing is money: if you use AWS Lambda, you pay for the amount of time your functions are consuming. I love this picture, because there are still many people, developers, IT operations teams, even some CIOs, who misunderstand what serverless is, just like in the picture: there is no server in the data center. But that is not true. There are many servers, maybe 100,000 servers, because the public cloud provider takes care of them instead of you, the developer. So these are the three most important things for addressing a serverless architecture, whether your serverless application runs on your local machine, on your on-prem infrastructure, or even on a public cloud: events, services, and functions.
So I'm going to go through how those three things relate as part of the developer journey. Many, many years ago, developers had one big monolith application. Maybe it's a sort of Frankenstein, this big monolith, but in a way it's perfect, because the monolith contains 100% of the functionality that addresses your business requirements. But there is a big challenge, which you probably know: it's very hard to maintain a monolith application. You need to spend a lot of time maintaining it. That's why developers began separating, splitting their big monolith applications into small modules. But the truth is, those modules still get packed into one single artifact, like a WAR file or a JAR in Java technology, for example, and you might have some big middleware underneath, from Oracle, IBM, and other big middleware vendors. That's why microservices were born. Since 2014, when Spring Boot was born, many developers have really loved building with Spring Boot. Microservice applications fundamentally have their own independent runtime environments, based on Java technology or other runtimes. And in reality, in production, your microservices have a very complicated architecture, like networked services. More importantly, your microservices, your modern application, might have multiple entry points: a RESTful API to communicate with other microservice applications, and also invocation requests from end users through a web browser, GUI stuff. And there are multiple data sources, rather than just one big, unified database, so from a design perspective you can follow patterns like CQRS, event sourcing, or the Saga pattern. So now, where does a function fit in this architecture, in this networked-services architecture with microservice applications? What does "function" mean for your serverless application? Let's go back to the three things.
So now you have many events. For example: hey, I need to render my 3D images to show an end user in the GUI, or I need to store my log file for tracing when I have some error, things like that. All of those are events. An event calls a function; the function means your application, a very small piece of functionality, and it might call another backend microservice application. So from the developer's point of view, what is the easiest way to run those three things, the events, the functions, and the backend services? I already mentioned the public cloud providers, like AWS Lambda, but there are two big challenges if you use a managed service to run your serverless application. The first is that it cannot address a multi-cloud strategy. If you adopt one single public cloud vendor for your serverless workloads, you cannot build a multi-cloud strategy; you can't run your many thousands of functions on top of private cloud, public cloud, and even hybrid cloud including on-prem. That's not possible if you use just AWS Lambda, Google Cloud Functions, or Azure Functions. The second, more from the developer's point of view, is the constraint on dependencies. Some of your development teams want to build microservice or serverless applications using Java, or JavaScript, or Python, or something else. But with AWS Lambda, they provide just a handful of runtimes, including Java, but not everything. So that's the challenge. What is the solution to this problem? Simple: manage it yourself, with your team. And in the meantime, luckily, we have a great technology, which you probably already know: the Linux container. OCI-conformant container images now enable the developer to ship a 100% complete artifact: you can pack your application code, the runtime, and the dependent libraries together.
Also, the developer doesn't need to worry about service discovery, registration, networking, and so on, because Kubernetes takes care of that instead of the developer. Then there's the service mesh. You know the Netflix OSS stuff with Spring Cloud and Spring Boot; the one big challenge for developers there was that the developer had to take care of all the configuration and microservices capabilities themselves, such as logging, tracing, intelligent routing, and fault tolerance such as circuit breaking, which meant injecting configuration, some YAML file or property file, into the application code, into methods in Java classes. But a service mesh based on Istio addresses that on behalf of the developer. And then there are APIs. This is the beauty of the current architecture of the public cloud, and even internal data centers: services. So now if, for example, a developer says, hey, I need some backend data service, you don't need to implement the connection to the backend database yourself; you just call an API, a data service. Or "I need storage," and you can call something like an S3 data service. All of these technologies make it easier and quicker to develop, build, and manage your serverless application. And more importantly, there is Kubernetes with serverless. This is the beauty of Kubernetes: there are more than 13 serverless and FaaS open source projects based on Kubernetes. If you're interested in these projects, you can go through the landscape at cncf.io; there is also the CNCF Serverless Working Group. But the truth is, Kubernetes is still not easy for developers. That is true, because with containers and Kubernetes, the developer needs to learn a lot of things: features, command lines, the architecture, how Kubernetes works, and how to use it to run a serverless application. And that's why Knative was born.
A funny thing: maybe six months ago, I met with a customer to talk about Red Hat's serverless strategy, and we mostly talked about Apache OpenWhisk; that's our cloud functions offering based on OpenShift Container Platform. But now we talk about Knative. Knative is, of course, based on Kubernetes, and with it developers can build, deploy, and manage serverless applications on top of Kubernetes. So these are the primitives: there are three primary components of Knative, Build, Serving, and Eventing. I'm going to show you a quick demo right away of how to stand up your serverless application from the developer's point of view. And most importantly, the Istio service mesh runs as a default underneath Knative. So, a quick demo. After this session you can go through it yourself: it's the Red Hat Developer Knative tutorial. There is plenty of documentation on how to set up a local Kubernetes platform, Minishift, which is based on Minikube, how to install the Istio pieces, and how to install Knative Serving; technically it's not really an installation, it's more of a deployment running on the platform. So this is my local machine, and I've already stood up my container environment. Let me click through my local environment based on Minishift. I've already stood up the Istio service mesh and Knative Build and Knative Serving. You can see the Istio pods here: there are some default pods for the Istio service mesh, like ingress, egress, the sidecar injector, et cetera. And there are also the Knative primary components, Build and Serving. It's all fine, because they're already stood up, fingers crossed; the network is not good. All right, so here's my demo scenario. I'm a developer. What is the first entry point for jumping into the serverless journey? The first thing is that you have to develop your own application. Maybe you use Spring Boot. So here is my Spring Boot application.
Here is my simple Spring Boot application. My artifact is "greeter" and the version is 0.0.1, and the dependencies are very simple: spring-boot-starter, spring-boot-starter-web, spring-boot-starter-test, no more. And this is my simple application, the greeter: "Hello, Knative." I already changed some code for a DevConf check. So imagine that I've already finished implementing this small piece of microservice application. The next thing is to build my Spring Boot application. I'm going to skip the tests to save time, just for the demo, and run a clean package. It takes just a couple of seconds to build successfully, and let me just make sure I have the artifact, the JAR file, under target: greeter. Yeah, it's there. The next thing is that I need to run it to make sure my application is working. One of the beauties of Spring Boot is that you need just a couple of seconds to run it, and it's already there. You can check by calling localhost: "Hello, Knative on DevConf." Or you can use the web browser, localhost:8080. Yeah, same. All right, cool. So now you've developed your microservice application and you've made sure it works. What is the next thing? You have to containerize your application before deploying it to Knative. So I'm going to shut down my local process, and once again, skipping tests to save demo time, run a clean package. I'm going to use the Jib utility for the Docker build today. Knative Build, one of the primary Knative components, does give you the ability to build your container image in multiple steps, and it can pull from a remote repository, but today I'm going to use the Jib utility to keep it shorter. And now I have the Docker image, greeter. The next thing is that I need to deploy my serverless application based on this microservice application. I already have a definition. Knative, by default, uses Kubernetes CRD objects, custom resource definitions.
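A minimal sketch of what that service definition could look like. The exact API version, namespace, and image tag here are assumptions based on the demo's artifact name, not taken verbatim from it, and the `serving.knative.dev` schema has changed across releases:

```yaml
# Hypothetical service.yaml for the greeter demo -- API version,
# namespace, and image reference are illustrative assumptions.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: greeter
  namespace: demo
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            # the image built locally with Jib in the demo
            image: dev.local/greeter:0.0.1
```

Applying a manifest like this is what creates the Knative Service, its Configuration, and its first Revision in one shot.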
You can find that serving.knative.dev is just one of the CRDs in Kubernetes, and I'm going to use the container image that I just created. So, very simply, you already know this: kubectl apply with my service definition in the demo namespace. But first, to check, kubectl get pods with watch: there are no pods in my project, in my namespace. Now I deploy my serverless application, and just like that, at exactly the same moment, the pod comes up, and you can see 2/2, which means the sidecar container was injected automatically by the Istio service mesh. And I can call it to make sure my serverless application works. Come on, come on; we could use a little luck with the demo. So this is my DevConf project, and I've already deployed it; it's really a network problem. And the last thing I wanted to show you: this is the autoscaling configuration of Knative. You can see here the grace period and the threshold; in this demo I just configured it to one minute, which means that if there are no requests from end users for one minute, your microservice application will be shut down automatically, scaled down to zero. That is the beauty of a serverless application. You can see the pods are already terminating; going back to the Minishift console, the web UI, they're already terminating. And after that, if you call it once again, your application stands up automatically. So that's the point. Okay, I have plenty of time today, so let's go back to the slides. All right, Serving: one of the three components of Knative, Serving provides the functionality to scale down to zero when there are no requests from your end users. That is one of the mandatory capabilities of a serverless platform. Second of all: today in the demo I just used the Jib utility, but you can use the Build primitive on top of Knative to build your container image.
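For reference, in the Knative releases of that era the scale-to-zero knobs lived in a ConfigMap in the knative-serving namespace. The key name below is my best recollection of that configuration and has changed across versions, so treat this as a sketch rather than the exact setting from the demo:

```yaml
# Sketch of the Knative autoscaler ConfigMap -- key names vary by release.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-autoscaler
  namespace: knative-serving
data:
  # How long to wait after the last request before scaling a revision
  # down to zero (the demo used roughly one minute).
  scale-to-zero-grace-period: "1m"
```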
You can define multiple steps, you can push to your container registry, and you can retrieve your application code itself from a Git repository. And the last thing is Eventing. Imagine that you have a bunch of serverless applications or serverless containers, but things happen at different times: some producer creates events for your serverless application, and the consumer consumes them at a different time. Eventing takes care of that with late binding. So, the last point: developers can now use Knative directly on top of Kubernetes itself, or, if you want a FaaS platform with a fancier CLI and features such as debugging, you can use a project like Apache OpenWhisk or another FaaS project on top of it. So here are my resources. You can go through opensource.com, where there are many serverless and Apache OpenWhisk related articles, the Red Hat blog, and also the Knative tutorial; you can follow it yourself. And we are out of time. Thank you very much.
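As a reference for the Build primitive described above, a Build resource with a Git source, multiple steps, and a registry push might look roughly like this. The repository URL, builder image, and registry destination are all hypothetical, and the build.knative.dev API was later superseded, so this is a sketch of the shape, not a definitive manifest:

```yaml
# Hypothetical Knative Build resource -- repo URL, builder image, and
# registry destination are illustrative assumptions.
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: greeter-build
spec:
  source:
    git:
      url: https://github.com/example/greeter.git   # hypothetical repo
      revision: master
  steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor           # one possible builder step
    args:
    - --dockerfile=/workspace/Dockerfile
    - --destination=registry.example.com/greeter:0.0.1
```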