Our next speaker is Mete Atamel from Google. He'll be talking to us about Kubernetes support for serverless computing, which frankly makes me uncomfortable when I say it. Great, take it away.

Thank you. Okay, let me set my timer because I only have 20 to 25 minutes. All right. Hello, everyone. My name is Mete Atamel. I'm a developer at Google, based in London. It's always great to be in Singapore because the weather is always much better than London, so it's great to be here. I normally do this talk in one hour, but I only have 20 minutes now, which means I have to speak really fast and skip some slides and some code. But the good news is that if you go to my GitHub page, I have a Knative tutorial that has these slides, and it also has a lot of the demos, much more than what I'm going to show today. So if you find Knative interesting, you can just go there and you'll find much more. Also, this is my Twitter. I usually tweet about Istio, Knative, Kubernetes kinds of things, so if you're interested, feel free to follow me. I'm also doing a tour in Asia, and as I go through it, I ask people what they know about Knative, what they are using for serverless, and whether they are running on Kubernetes. It's a five-question survey, so if you have time, please fill it in. I also have a bunch of t-shirts and things that I put here, so after the talk, feel free to come and grab some, okay?

All right, so let's do a quick intro to Knative. What is Knative? Knative is a set of Kubernetes-based open-source building blocks for serverless. And this might sound a little bit weird, because when you think about Kubernetes, you're thinking about containers; there's nothing serverless about it. And then this Knative thing claims to be serverless on Kubernetes. So how does that work? Hopefully, we'll be able to explore that today in 20 minutes.
So when you think of an ideal serverless framework, first, there should be no servers. Obviously, there are always servers, but you shouldn't care about them as an application developer. You should be able to write your code in an idiomatic way. For example, I'm a C#/.NET developer, so I should be able to write my code in ASP.NET Core; I shouldn't have to change my code. Or if you're a Java person, you should be able to use the tools and language frameworks that you're used to. When we think about serverless, we usually think about functions that are driven by some kind of events. These events can be HTTP-based, but they can also be message-based; either way, they're usually event-driven. And last but not least, you probably want your framework to be portable. If you write your function and deploy it to one place, you probably don't want to rewrite the whole thing just because you want to deploy it somewhere else.

When you look at the containerized world, Kubernetes has kind of become the de facto standard for running containers in the cloud. I won't get into details because we don't have time, but basically when you run containers, you probably run Kubernetes as well, either directly or indirectly. Developers love serverless because you're trying to solve a problem, and serverless enables you to write the code and not really care about the underlying infrastructure; it's someone else's problem. And operators like Kubernetes because it gives them a common language and a common framework to schedule and run containers. But developers don't really care about those details. So there's this tension between developers and serverless on one side and operators and Kubernetes on the other, and that's where Knative comes in. It's an open-source project that started at Google, but now a bunch of other companies like Red Hat and IBM are supporting it as well.
And it's open source, so anyone can contribute. It basically provides a set of components, and these components are serving, eventing, and build, which we'll take a look at. You can think of these as the ingredients for serverless, the things that you need to build a serverless framework. Knative tries to do that for you, so you don't have to build them from scratch. And the concepts in Knative came from our internal learning. All those concepts of configuration and revisions and routing, which I'm going to show you in serving, came from how we work at Google. We learned from that and open sourced it for everyone else to use.

Now, the Knative stack looks like this. The platform is Kubernetes. On top of that, there's Istio; Istio is a dependency of Knative. For those of you who might not know, Istio is another open-source project from Google and other partners. You can think of it as a way to manage your container traffic: Kubernetes runs your containers, and Istio manages the traffic between your containers. On top of that, we have Knative with build, serving, and eventing. And on top of that, we have the products that use Knative. There's something called Cloud Functions in Google Cloud, which is a serverless offering from Google, and we announced that you will be able to run containers with Cloud Functions; that container support on Cloud Functions is built on Knative. And then our partners, like SAP and Pivotal, are all going to have function services that are built on Knative. So the hope with Knative is that it becomes the base layer that people build more things on top of, and the whole thing runs on Kubernetes.

So how do you use Knative? Well, first you need a Kubernetes cluster. This cluster doesn't have to live in Google Cloud, by the way.
It can be any regular Kubernetes cluster, but in this case, I'm creating one in Google Cloud using the gcloud command-line tool. Once I have my cluster, I create a cluster role binding; this is needed for Istio to manage my cluster. Then I install Istio. In Google Cloud, you can get a Kubernetes cluster with Istio with a single command, so if you want to do that, you can. But if you already have a Kubernetes cluster running, you can also install Istio manually yourself. And once you have that, you install Knative. Serving, build, and eventing are separate things, so you don't have to use them all at once; you can install them one by one. But if you want all of it, you can also install them all at once. This is going to get much easier in Google Cloud: at some point, you will be able to say, give me a Kubernetes cluster with Istio and Knative and everything installed, and it will just give you that. But as of today, Knative is really new, so you have to go through these steps to install it. It's quite simple, though.

So let's talk about the first part of Knative, Knative Serving. But before I do that, let me just show you something. First of all, I already have a Kubernetes cluster that I set up before the talk. If you look at Google Cloud Platform, on Kubernetes Engine, there's a Knative cluster that I already set up. Let's just make sure that my Knative things are working. If I do kubectl get pods in the knative-serving namespace (everything gets installed under different namespaces), I'm just making sure that the Knative Serving pods are working. Hopefully they work; otherwise, the rest of the talk will be just slides and no code. Oops, sorry. I shouldn't laugh. All right. Is Wi-Fi working? Yes. All right. So those are working. And while we're here, let's look at eventing. That should be quicker. Yes.
Those are running as well. And the last one is build; these are separate components. Okay, knative-build, that's working as well. And let me just move this down a little bit. Yeah. Okay. So everything seems to be working.

So let's just quickly talk about what serving is. Serving basically enables you to deploy a container in a serverless way. What you do is define a YAML file and say: this is my service, this is my container image, and this is the configuration of my container. You just tell Knative, can you please deploy this, and it will deploy it. By default, it will be running on a single pod, but you get autoscaling for free. If more people start calling your service, it will be scaled up to a maximum number of pods. If no one is using it, it scales down to zero, automatically. And all the details of getting clients connected to your service are handled automatically: once you deploy your service, all the routing rules are set up, and the integration with the underlying networking is also done for you. You just specify your service and your container, and all the details of how to run that service are handled by Knative Serving.

There's some vocabulary that comes with Knative. First, you define a service. By the way, this is not a Kubernetes Service; it's a Knative Service, a separate construct. This service has a configuration that I'm going to show you. Once you deploy the service with the configuration, Knative creates what's called a revision. A revision is kind of like a snapshot of your service with the current configuration. Then it updates the route to point to that revision, meaning your service is deployed and traffic to your service is immediately routed to that revision.
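To make that concrete, a Knative Service manifest of the kind used in this demo looks roughly like this. This is a sketch based on the v1alpha1 serving API that was current at the time; the image path and names are illustrative:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-csharp
  namespace: default
spec:
  # runLatest mode: always route all traffic to the latest revision
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            # an image already pushed to a registry (illustrative path)
            image: docker.io/<your-user>/helloworld-csharp:v1
            env:
              - name: TARGET
                value: "C# Sample v1"
```

The application reads the `TARGET` environment variable and uses it to build the message it prints, as described in the talk.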
Now, if you change anything about the configuration, Knative will create a new revision and update the route automatically. So it has this notion of revisions, and you can either have the route updated immediately to the latest revision, or you can split traffic, which I'm going to show as well. That's what Knative Serving is.

In my tutorial, I have many examples, but let's just look at a couple of them. First, let's look at service-v1.yaml. This is my definition for a Knative service. As you can see, this is called a Service, but it's from the Knative API; it's not a Kubernetes Service. Under here, I'm saying run latest, meaning: when you deploy this service, immediately route traffic to it; always run the latest configuration. And this is the configuration of my service, where I specify the container. This container points to an image that I already pushed to Docker Hub; it's hello world C#, version one. And these are the environment variables that my application is using. I have an environment variable called TARGET with the value "C# Sample v1", and the app will use this to print its message.

So if I want to deploy this, what I can do is come here and... well, before we do that, let's just watch some things. Here, I'm doing kubectl get on the Knative service, the configuration, and the route. Right now I have one pod running; that's for something else, so let's not worry about it. But if I do kubectl apply on service-v1, this will deploy the service to Knative, and now it says the Knative service is deployed. And if you look at it, it's creating a pod for my service. It created a service, it created a configuration, and, we can't see it here, but it also created a revision, I think. Let's just watch again. Right. So it also created a route and a revision and a configuration. Everything is created.
And if I do a curl... let me find my curl command. Yes. In this curl command, hello-world-csharp is the name of my service, default is the name of my namespace, and then there's the IP of my Knative ingress. I didn't set up a domain, so I'm using nip.io, a wildcard DNS service, to point a hostname at my ingress IP. So I'm basically pointing at my service, and we get a response already that says "Hello C# Sample v1". Everything is working.

Now, if I want to update my service: the only difference between service-v1 and service-v2 is that we're still using the same image, but we changed the message to v2. We changed the configuration. If I deploy this with kubectl apply, it will create a new configuration, a new revision, and a new pod. And if we curl the same ingress, you see that it's already v2. Things are quite fast. In v3, I actually replace the image; changing the image is also a configuration change. And I change the value again, so instead of v2 it says v3. Now I'm pointing to different code. If we deploy this one, you will see that a new pod and a new route and everything get created. And if we curl again, you see that now it says "Bye C# Sample", because we changed the container, so it's running different code.

And lastly, now we have three revisions: one, two, and three. If you want to deploy a new revision but you don't want all the traffic to go to it, you want just 20% of the traffic to go to it to test it out, you can do that as well. So if we take a look here, let me switch. In here, instead of saying run latest, I'm saying release. That's a different mode: instead of deploying the latest version right away, I'm saying I want to do a release. And these are my revisions; I'm using two revisions. One is the first revision I deployed.
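The release-mode manifest being described is roughly this shape. Again, a sketch of the v1alpha1 API; the auto-generated revision names are illustrative:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-csharp
spec:
  # release mode: pin traffic to named revisions instead of the latest one
  release:
    # [current revision, candidate revision] -- names are illustrative
    revisions: ["helloworld-csharp-00001", "helloworld-csharp-00004"]
    # the candidate (second) revision receives this percentage of traffic
    rolloutPercent: 20
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/<your-user>/helloworld-csharp:v1
            env:
              - name: TARGET
                value: "C# Sample v4"
```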
Four is the next revision, which I'm going to deploy with this YAML file. And then I'm saying rolloutPercent is 20, meaning I want this new revision to get 20% of the traffic. This new revision points to the same image, but it's going to say v4. So v4 is the new one that will get 20% of the traffic; the other one will get 80%. So let's deploy this one: kubectl apply service-v4. And if you do curl and it hangs, that means it's creating the pod, so it's waiting for the pod to come online. And by the way, as you can see, some of the pods are terminating because I'm not using those versions of the service, so they scale down to zero. So now it says v4, but if I curl again, most of the time it will say v1, and 20% of the time it will say v4. So that's the kind of thing you can do.

All right. So that's Knative Serving. Next, Knative Eventing. There's a lot here, but I have to do it really quickly. Basically, what Knative Eventing enables you to do is connect event sources to event consumers using some kind of flow in the middle. That's all there is to it. There are a bunch of different event source types already implemented. If you want to listen for GitHub events, for example, there's a GitHub event source you can use; anything that happens on GitHub, you can listen for. On Google Cloud, there's something called Cloud Pub/Sub, which is a messaging framework, and you can listen for Pub/Sub messages. And there are a bunch of others, and there will be more and more here. But basically it enables you to connect event sources to event consumers. The way it works is that there are two different modes: a simple one and a more complicated one, and to be honest, I think the second one is more common. You have the event source; this is the thing in Knative that listens for the external events.
So if we're listening for Google Cloud Pub/Sub, it listens for Pub/Sub messages. Once it gets a message, it has to do something with it, so it passes it to a channel. A channel can be an in-memory channel, or it can be a more durable channel. And then from the channel, you go to a subscription: a service will have a subscription to the channel. The message goes from the channel to the subscription, and from the subscription to the service. Or you can chain services: from a channel, you can go to another channel, and that can go to other services, so you can do chaining as well. There's much more detail about this in my tutorial that you can read.

But what I want to show you here is a sample where I'm connecting Google Cloud Storage to the Google Cloud Vision API using Knative in the middle. Google Cloud Storage is a place where you can create a bucket and save any kind of files you want. And the cool thing about Storage is that you can enable Pub/Sub notifications on it: you can say, on this bucket, I want you to send me a Pub/Sub message when someone uploads a file. So when someone uploads a picture, a message goes to a Pub/Sub topic, and you can have Knative Eventing listen on that topic, get the message, and use an in-memory channel to deliver it to a Knative service. And then the Knative service can do whatever it wants. In this case, it makes a call to the Vision API, which is a machine learning API in Google Cloud where you pass in images and it analyzes them to extract labels. So as you can see, Storage and Vision are two different things, but you can glue them together using Knative. So let me show you this quickly. If we go to... where is it?
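For readers following along, the source and channel pieces of this demo (created before the talk) are roughly this shape. This is a sketch based on the early v1alpha1 eventing API, which has since changed substantially; the project, topic, and secret names are illustrative:

```yaml
# an in-memory channel to buffer events inside the cluster
apiVersion: eventing.knative.dev/v1alpha1
kind: Channel
metadata:
  name: pubsub-channel
spec:
  provisioner:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: ClusterChannelProvisioner
    name: in-memory-channel
---
# a source that pulls from a Cloud Pub/Sub topic and sinks into the channel
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: GcpPubSubSource
metadata:
  name: gcp-pubsub-source
spec:
  gcpCredsSecret:            # a Kubernetes secret holding a GCP service account key
    name: google-cloud-key
    key: key.json
  googleCloudProject: <your-project>
  topic: <your-topic>        # the topic the bucket notification publishes to
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: pubsub-channel
```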
Eventing Vision. So I already created the source; we're already listening for GCP Pub/Sub messages. I already created the channel, an in-memory channel, so once we get the message in Knative, we save it to the in-memory channel. And now I want to create the service that will actually get the message, and also the subscription for it. So what I'm doing here is kubectl apply subscriber; let me show you this quickly. The source is already created, the channel is already created; I'm not showing them to you because we don't have time. But in the subscriber, first we define a Knative service. This is a regular Knative service that points to some Docker image that I already created. One thing I did here is set the autoscaling minimum to one, meaning I will always have one pod running. It won't go down to zero, because I don't want to wait; I always have one pod running. And then I also create a subscription. The subscription is listening for messages from the channel that I already defined, and it's connecting to the service. So it makes the connection between the channel and the service that we created.

So if everything went well... let me do this: kubectl get pods. As you can see, my Hello World pods are terminating because no one is using them, but the vision one is running. That's good. I want to look at the logs, so let me check my logs. Yeah, let's do this: kubectl get pods; I need to get the pod ID. All right. So we are now looking at the logs of my service. It's listening on port 8080. All right, now let's go here and upload this image. Does anyone know where this is? Any guesses? No one? All right. It's Ipanema Beach in Rio, which is one of my favorite beaches. So let's upload this image to Cloud Storage. So this is listening.
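The subscriber manifest just applied is roughly this shape, a sketch against the v1alpha1 APIs; the image path and names are illustrative:

```yaml
# a Knative service that receives events and calls the Vision API
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: vision
spec:
  runLatest:
    configuration:
      revisionTemplate:
        metadata:
          annotations:
            # keep at least one pod running so we never wait for a cold start
            autoscaling.knative.dev/minScale: "1"
        spec:
          container:
            image: docker.io/<your-user>/vision:v1
---
# wire the channel to the service
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: vision-subscription
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: pubsub-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: vision
```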
If you go here, to Cloud Storage, I have a bucket for my Knative demo, and I enabled Pub/Sub messages on it, so anything I upload here will generate a Pub/Sub message. So: upload files, choose the beach. All right, ready? It's uploading. Hopefully it works. Come here. Boom. We got the message, and now it's making a call to the Vision API, and the Vision API said this picture is labeled sky, body of water, sea, nature, coast, sunset, and so on. So we made the connection from Storage to the Vision API. We also have, for example, a sample with Twilio, where you can send a message to a number and it uses Knative to reply to that message in different ways. It's pretty cool if you have time to check it out.

All right. And the last thing I want to talk about is Knative Build. Knative Build, in a single sentence, basically allows you to go from your code to a container in a registry. You can build your code, create a container image, and push it to Google Container Registry or to Docker Hub, things like that. There are some primitives. A build is basically the set of steps in your build. You can use build templates: there are many templates you can use, or you can take your own build, make it a template, and reuse it for multiple builds. There are different kinds of builders that you can use. And finally, if you need to authenticate, let's say with Docker Hub, you need a service account, so there's a service account that you need to set up as well. Again, I have examples of this, but... how much time do I have? Zero. Okay. I just want to show you the YAML file here and then we can finish. Or maybe I won't show it. Okay, maybe I'll skip it, because it's just right here. Just one sec. So this is a build that pushes to Docker Hub. All we are doing here is giving it a name.
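The build manifest on screen is roughly this shape. This is a sketch of the v1alpha1 Knative Build API, which has since been superseded by Tekton; the repository and image paths are illustrative:

```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: docker-build
spec:
  # a Kubernetes service account whose secret carries Docker Hub credentials
  serviceAccountName: build-bot
  source:
    git:
      url: https://github.com/<your-user>/<your-repo>.git
      revision: master
  steps:
    # Kaniko builds the Dockerfile in-cluster and pushes the result
    - name: build-and-push
      image: gcr.io/kaniko-project/executor
      args:
        - --dockerfile=/workspace/Dockerfile
        - --destination=docker.io/<your-user>/helloworld-csharp:v1
```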
We're saying: this is the service account that I use, which I define in Kubernetes; this is the source that I want to build and the path to the source; and then the steps: I'm using Kaniko, which is an open-source project, to build my image, this is my Dockerfile, and the destination is defined here. Once you do that, you just apply the YAML and all of this happens. Okay. That's all I unfortunately have time for. Thank you very much. If you want to grab some t-shirts and everything, feel free. Thank you.