 So I am Max, and I work as a Distinguished Engineer at IBM. I was actually one of the first people to start contributing to Knative after Google made it public. Obviously, lots of folks from Red Hat, VMware. I'm sure I'm going to forget some names. But the project has grown, especially with the folks that later created Chainguard. Most of them were from Google, and they created Knative. I worked on that for about two years, or two and a half years. And now I'm actually working on what we call quantum serverless. It uses Knative underneath, but my mission is broader than that. It's actually open quantum computing. We're not going to talk about that now. That was a talk before. That's why I was late. But I'm here with my colleague, David Hadas. And I'll give a lot of credit to David for this talk, not only for putting it together, but also because it came from a blog post that he wrote. So David works in Startup Nation. Anybody know where that is? You all know Silicon Valley, right? But there is a country called Startup Nation. Yes, I've never been. But I've collaborated with so many people from Israel that it's probably true. It's Startup Nation. So anyway, he's from there. He worked 10 years, I think, trying to build startups, and then he gave up and joined us. He's an expert in security, and since he contacted me to help in Knative, he's made a big difference. So he's going to talk about some of the work that he's also doing. But what's interesting about this talk is that it tries to position Knative as the place you should start. So let's get to it. Obviously, when you're talking about cloud computing, in terms of the big picture, it's pretty much what everybody is using. I'm here because this is cloud native. And people are using cloud computing for all types of problems, even quantum, which is what I just came back from, a talk on that. But there is some downside to it. 
And we wrote a blog post, at least for some aspects of this. So starting from the bottom: I think yesterday, if you were at the keynote, colleagues from Azure were talking about this, as well as Red Hat, where there is actually a lot of energy being consumed by those data centers. And if we keep going in that direction, it's not going to help the problem that we all face. I mean, I don't have to tell you. Hopefully, you're not a climate denier. But if you are, maybe we can talk about it offline. There are quite a few issues with the climate right now. So what could we do, for instance, as computer scientists, as people in the community, to make those cloud computing centers a little bit more energy efficient? So that's one of the downsides. And the demand is going to keep going. So it's not a downside that we're going to have tomorrow. It's a downside today, and it's going to keep growing. So one of the things that we believe, and we don't have proof of it, so it's not something I want you to bet on, but it's an intuition, is essentially serverless. The approach of running software for just the amount of time that you need, with just the amount of resources that you need, being very dynamic in how you go. So instead of scaling and trying to predict the scale, you scale based on the demand. It's going to help, partly because you're using fewer resources. You're using just the amount of resources that you need. And when you start multiplying this by all the workloads we're all running and the rest of the world, the intuition is that it will help reduce some of the energy pressure that we're putting on the cloud. So we wrote this blog post, it's on the IBM Cloud blog, on how we believe Knative, for instance, is a good direction in that. And colleagues have also built on this with, for instance, Kepler as a model for you to measure at least the carbon offset that your computation is doing. 
So a lot of clouds are going to expose this. But that's just one benefit. There are a lot more. David is going to walk you through the benefits that we've identified of using Knative. Not really instead of Kubernetes, but using Knative as the way to use Kubernetes. And of course, because Knative runs on Kubernetes, if you need Kubernetes directly, you can also use it. So, David. Thank you. Can you hear me? Yes. So we'll start with a question that this old dog doesn't know the answer to, because every person I would ask would give me a different answer, which is: what is serverless? Now, there are different concepts of what serverless is, so we will define it for this talk. For this talk, we talk about serverless when we talk about a situation where you approach the cloud with whatever service you want to deploy there, without telling the cloud how many resources it needs to associate with that service. You take a step back and you give the cloud the decision of how many resources are going to be deployed for your service. And of course, it means that the cloud needs to measure that and decide that for you. In this case, this is what we will call serverless. And of course, it goes hand in hand with pay-as-you-go, because if you didn't define how many resources you're going to use, then you need to pay for as much as you're actually using. So there used to be these two options for you. You either use microservices and you define exactly how many resources: I want 27 pods of this deployment. How you get to that number is a good question, but that's what I want. And then there's the other option, which is serverless, which says: I have no idea how much I want. It's dynamic. I will let the cloud decide. And that's serverless. But the truth is, it's not really two options. There is a whole range of other options. 
You may say, OK, I want at least three pods, or at least one pod running all the time, because I want that service, like a microservice, to answer every request I have. But then when I need more, please make sure I have more. And I don't care how many. If I need 300, use 300. Or I may say, OK, I want between 3 and 10. Don't ever use more than 10, because I'm never going to pay for that. So this is a completely dynamic range, and it makes the question "what is serverless?" completely irrelevant. And that changes how we see Knative as well. So let us redefine what Knative is. Knative is your way, your automation layer, to deploy native Kubernetes services. And what is a native Kubernetes service? A native Kubernetes service is one that follows the best practices, the twelve-factor app. If you are building microservices based on the twelve-factor app methodology, then you are probably good to go with Knative. Serverless, of course, follows from that: any serverless application that you're building would also comply with the twelve-factor app, and it would also be considered a native service. And also, if you're going for the new Knative Functions, which is a build system that helps you create new services, then those functions at the end of the day are just container images, and those container images run serverless. So they are native Kubernetes services too. So now we make a distinction between running against Kubernetes directly, in which case we talk to the Kube API and do whatever we are used to doing, or talking to that automation layer, which does it for us. It's a layer that simplifies things, implements the best practices for us, and leaves us with less complexity as a result. So instead of wrangling all the Kubernetes resources and making sure we get them right, we essentially have one important resource to manage, which is the Service resource. 
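As a sketch of what that single resource looks like, here is a minimal Knative Service manifest (the service name and image are placeholders, not from the talk); Knative expands this one object into the underlying Deployment, Revision, and Route for you:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                               # placeholder service name
spec:
  template:                                 # template for each new revision
    spec:
      containers:
        - image: ghcr.io/example/hello:latest   # placeholder image
          env:
            - name: TARGET
              value: "World"
```

Applying this with `kubectl apply -f service.yaml` is all it takes; everything else (pods, routing, scaling) is derived from it.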
And that's like a new abstraction layer that is added to Kubernetes to allow us to deploy our services. So that reduces our complexity as a result. It reduces how much we need to learn, or our team needs to learn, in order to deploy, run, and operate those services. In a way, you can think of Kubernetes as a non-opinionated system. It allows you to do everything, deploy anything you would like, unlike Knative. Knative is very opinionated. It has the best practices embedded in it, and it defines what are the best practices that you should follow. Now, that may be a good thing, and that may be a bad thing in certain situations. So you don't have to use Knative for all your microservices. But for most of your microservices, it will bring you uniformity, it will bring you, as we will see later, more security, and it will bring you the ability to operate faster with your services, and so forth. So it's like, instead of getting all the bits and pieces to build a camera, you get a ready-made camera. You can take great shots with it, but you can't make coffee with it. By the way, at the bottom are examples of how simple it is to deploy a Knative service. It's just a one-liner. But of course, you can use YAML to do that, and you can define everything that a service can have. If you just want to play with it and create your service very quickly, you can also do that with the CLI. So let's talk a little bit about that autoscaling that Max also referred to as part of our path to reducing energy. So, I want 27 of these, and 17 of these, and six of that. And how did you get to those numbers? Well, I don't know. How do you get to the right number of resources you need? How many pods you need for each microservice? Is that a magical number that you have in your pocket? No, you probably overprovision. You probably just say: 32, that should do it. And if it doesn't, then I'll increase to 48. It's a good number. Why not? 
Then you overprovision everything, just everything. All your microservices are overprovisioned. And that's a very bad thing for managing your resources, of course, because at the end of the day, they all introduce some overhead, and they all consume energy one way or another. So you really need that magical thing that follows your load dynamically and decides: now you need 4, now you need 40, and changes the number of pods for you. So autoscaling is an essential part, and that's part of what Knative offers you. Now, one more very nice feature that follows from that: look at the picture that we see over here. We see all those provisioned pods which are not being used. Of course, while you have those pods provisioned, you cannot reduce your energy, and you cannot reduce the number of VMs in your cluster as a result. But if you are able to eliminate them through some magical way, which is an autoscaler, then you only have a bunch of pods that you control and that you know are actively working. This will allow you, in the future as we move forward, to also scale the cluster, dynamically reducing the number of VMs, keeping some VMs spare for your growth, and then growing those numbers once you start using those extra VMs, always keeping something in reserve because it takes time to bring up a VM, let's be honest. So that's another benefit that comes with Knative. The second thing is: OK, we are using microservices. Most of us are developing microservices for Kubernetes. Microservices were designed with this mindset that you take the big problem, you divide it into small problems, and each problem has a very well-defined API, so that you can change the revision of each little piece and continue growing it, developing it iteratively. So to do that, it's very nice to keep all those revisions as images, but you also need the deployment system, the Kubernetes system, or the Knative system, to help you do that. 
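The "between 3 and 10" style of bound discussed earlier can be expressed with Knative's standard autoscaling annotations on the revision template. A sketch (service name and image are placeholders):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "3"    # keep at least 3 pods warm
        autoscaling.knative.dev/max-scale: "10"   # never scale past 10 pods
    spec:
      containers:
        - image: ghcr.io/example/hello:latest
```

Setting `min-scale: "0"` gives you full scale-to-zero, which is where the energy savings come from; a nonzero minimum trades some of that for latency.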
Because there are risks in changing revisions, you may want to decide that at the beginning you start with just 1%, then go to 5%, then go to 25% for the new revision, and over time you come to trust that revision enough to throw out the older revision and move fully to the new one. So revision management is a crucial thing that we all need in Kubernetes, and it comes as part of Knative very natively. We will demo that later on. And the next thing is my pet topic, which is security. So first of all, the first bullet: the Knative community is presently working very actively to add TLS to Knative, such that you would not be required to rely on your service mesh for TLS. So you could either use your service mesh, or you could get Knative with TLS without having to deploy a service mesh. That's the first thing. Second, we just talked about rolling out new revisions. That helps you when you have a patch for security reasons. It expedites the entire process of your teams getting new patches to work, because it reduces the risk of each new patch. You can deploy the patch, see very quickly that everything is OK, and then continue with that patch. So that's another security aspect of Knative. The third aspect we're going to talk about in the next few slides, so I'll leave it for now. But basically, it says: OK, we have all those services deployed and they're all very uniform. Let's use that and do security monitoring of those uniform services, because we already defined how they are going to look as far as best practices. We're going to use that for security. And then the three points on the right side address a very well-known problem in security, which is configuration problems, configuration drift. Every time you configure a system, even if you got it right the first time, three weeks later someone has changed the configuration. 
Now, why is Knative such a great thing in that respect? Because most of your configuration is not done by humans anymore. Most of it is done by the automation. The humans define the service, and the automation does all the configuration in Kubernetes for you and sets all the configuration right. So even if someone tried to change that, you have continual repaving, and it will be redone to the right configuration. So from a security perspective, you don't have to worry anymore about configuration drift and misconfigurations. OK, so let's talk about the security use cases that we all face. The first is that there is a basic and very naive assumption by everyone that if you do everything right, then everything should be right. But that's not the reality with cyber. With cyber, even if you do everything right and you follow all the best practices, your microservices are still vulnerable. Well, if you create a microservice, you look at your team and you say: well, they are well trained, so maybe that's so. Maybe they followed all the practices. But then they use all those libraries, and you don't know the people who wrote those libraries. And then those libraries use other libraries, and so on and so on. And it's not only that. You're using development systems, and you don't know all the people who developed those development systems and the extensions you're using with them. And your GitHub, sorry, your Git that you're using, and the image repository, and the CI/CD tools that you're using. Well, you're using a lot of tools and backend services with your image. And all of these need to be perfectly secure, which is unrealistic, by the evidence of what we see in actual deployments. So assuming your microservice is safe is a very bad practice. The right practice is to always assume your microservices are vulnerable. That's the normal situation. 
You have no idea what the vulnerabilities are, but you know they are vulnerable. So someone may find those vulnerabilities, may use those vulnerabilities. You just have to keep an eye on them. You can't just deploy pods and assume they are safe. You have to keep an eye on them. You have to monitor them all the time. Everything that goes into your pods needs to be monitored. You have to have those gates which will block anything that may exploit your pod, and those gates which will delete any pod that may have become misused. And of course, you have all these other use cases where you already know about a bunch of CVEs, but it takes, on average, two months to fix an average CVE. So if that's the case, then you are running with CVEs all the time, because you may have more than one. You're running with CVEs all the time, and you need a way to deal with that. Maybe you also know about an exploit which is compatible with that CVE. So you know exactly what you need to protect from, but you need to have that protection in front of every pod or every service that you have. If you put it at the perimeter, then it only protects against someone who comes through the perimeter. If the attacker is already in, in another pod somewhere, then it doesn't protect you. So these are the use cases that all of us face all the time. Now let's use the fact that we have a very well-designed system with microservices to monitor those services. In the old days, when we were using monoliths, all the requests came to that monolithic entity, and that monolithic entity behaved in all different ways. For this request it does this, for that request it does the other thing. We didn't see everything that was going on inside that monolithic application. But we are using microservices, right? So let's use that. We know that every microservice, if it's well designed, should have a very clear pattern in the requests coming in. 
So let's look at what is coming in and make sure that the pattern we see conforms to what we expect to see. Let's look at the behavior of each and every microservice, because after all, it's just code reacting to a very specific request. So we expect the behavior to be quite the same every time it processes a new request. That approach, which is security behavior analytics, is implemented as part of a Knative extension called Security Guard, which you can get with your Knative deployment. Security Guard is embedded into the queue-proxy of Knative, which means that you don't have to deploy anything special. Every request you have already goes through the queue-proxy; now it's also being monitored. And we also have the chance of blocking any request which is off. We also sit right next to your user container, so at the same time we are able to monitor certain aspects even without privileges. Even without having special privileges, we're able to monitor your user container, for example, any attempt by your user container to start approaching IPs it's not supposed to approach. And we also have a machine learning component as part of it, which learns the pattern. So you don't have to create all those hundreds of rules for your hundreds of services and maintain them all the time. You have machine learning that helps you do that, and it creates a certain rule set. And if you have a CVE, you may decide: well, I want to change that rule set because I want to be more strict, or I want to make sure that this rule set is fine for me and protects against this CVE, and so on. OK, demo, just in time. So we'll start with Knative deployed on Kubernetes. We have the list of services and the list of revisions. Then, going to kubectl, we get the number of pods. And we will deploy a new service, just to demonstrate how easily we can do that with the CLI. 
We just need to give the name of the service, the image, and the environment variables. And we set the scale to at least one. It can be any number, but at least one. We get the URL for that service right away, and we can use it. We can immediately access it, and it will respond. We're fine. We deployed a service. So next, let's see what happens if we load that service. We will start 4,000 connections against that service, then we will stop those connections, and then we start them again. So we start increasing the load, and as you can see, the number of pods just increases to overcome the new load. Then, when we stop it, the number of pods adjusts. And we start it again, and it increases again. So OK, fine, that one works. The next thing is: let's deploy a new revision. The change that I'm going to make in the new revision in a second is just to change an environment variable, but I could change the image just as well. We will give that new revision "latest is 5", meaning I'm going to give it 5% of the traffic, which means I'm going to keep 95% for the older revision. And you see the number of pods, and the number of pods has changed. I have up there the list of revisions, with the 5% and 95%, and the new pod for revision 2 was created. So let's play with it and change that to 50%, and see that the system actually works. And yes, it changes the number of pods accordingly. And of course, at the end, I will change it to 100%, so we don't need revision 1 pods anymore. We have revision 2 completely deployed. And we may reach a case where we find, after some time, that revision 2 is not working as we expected, and we want to go back to revision 1. As long as we still have revision 1 deployed on our system, on Knative, all we have to do is tell Knative: OK, please give 100% of the traffic to revision 1. That's all we have to do, and the system will adjust. So it's all very nice. 
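The demo steps above can be sketched with the `kn` CLI roughly like this (the service name, image, and revision names are illustrative, and the commands assume a running Knative cluster):

```shell
# Deploy a service with an image, an env var, and a minimum of one pod
kn service create hello \
  --image ghcr.io/example/hello:latest \
  --env TARGET=v1 \
  --annotation autoscaling.knative.dev/min-scale=1

# Create a new revision by changing the environment variable...
kn service update hello --env TARGET=v2 \
  --traffic hello-00001=95,@latest=5       # ...and give it 5% of traffic

# Promote gradually, then fully
kn service update hello --traffic hello-00001=50,@latest=50
kn service update hello --traffic @latest=100

# Roll back: while revision 1 is still around, point all traffic back at it
kn service update hello --traffic hello-00001=100
```

`@latest` always refers to the newest revision, so the same command shape works for every rollout and rollback.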
Let's go and see the security features, which is my pet project, so not surprising. We'll start with a simple service, and we're going to use that service. We assume that the system was already trained on that service, so all the rules are out there. And what we're going to do is start by sending a query string or a header that is not supposed to be there. We are monitoring the log at the bottom, and we see an alert coming up. And if you also notice, we asked Knative to block the requests which are creating the alerts, so these are blocked. But it's not only new keys that are being monitored; it's also the values and what's in each value. It could be that a value is too long, for example; the system has already learned the expected length of each value. Or it could be the actual content of that value. And I'll stop for a second. What you see here, for example, is that it detected that we were using round brackets. Round brackets are used in all sorts of attacks, and if that specific value of that specific service shouldn't have brackets, then let's block it. Let's not let that in. So that's what the system did. And the next thing: let's say everything went sour, and we actually have an exploitable service, and someone is able to run a remote shell on it. We will just do an exec as an example. So we will run an exec against our service in a second. Normally, when you have a remote shell, the first thing you're going to do is try to install something on that pod. So we will try to install, in this case, wget; it doesn't really matter. And the system detects that there is now a connection to an IP that is not supposed to be addressed from this pod. There is an alert, and there is a block. I mean, sorry, this pod was deleted, as you can see here. It was deleted by the system, and a new pod came up to replace it, because that's done automatically by Kubernetes once you delete a pod. 
And the attacker is not very happy, because everything that he did until now to get to that point is destroyed. So he needs to start the attack over, and it will just repeat. So he will go elsewhere. OK, so limitations. As we mentioned, it's an opinionated system, so you can't do everything with it. You can't deploy an SQL database with it. It suits your homegrown microservices. So what do you do? How do you handle those situations where you have something to deploy which is not suitable for Knative? Well, you just use Kubernetes to do that. It's a mix and match. You can use Kubernetes for any service which doesn't fit the pattern, and then use Knative for everything which fits the pattern. And normally, your homegrown microservices should fit the pattern. So I'll ask Max to continue. Yes, so if all of this wasn't enough for you in terms of why you should be using Knative, there's actually another reason, and it's a very important one. It's the fact that Knative comes built with an eventing infrastructure. And this is important because, if you can imagine, a lot of the applications that you're building are not just a bunch of microservices. It's also about which microservices you need to run, what event is going to trigger an execution of a service, et cetera. And when you have to do this, you need it to scale. You need to keep your events in a queue, you need to be able to consume them, so you need sinks and sources, et cetera. And the good thing is, the designers of Knative, and here I'll give credit to Matt, Scott, and the rest of the team now at Chainguard, built this in. They initially created Knative with an eventing infrastructure, so that comes included. And even better, you can plug in your own. So for instance, at IBM, we plug in MQ, and you can plug in Kafka for an open source version, et cetera. So on top of everything else that David told you, there's even more. 
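The eventing side described above can be sketched with a Broker and a Trigger that routes matching events to a Knative Service; a minimal illustration (the names and event type are placeholders):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: orders-trigger                      # placeholder name
spec:
  broker: default                           # the Broker that queues events
  filter:
    attributes:
      type: com.example.order.created       # only deliver this event type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor                 # a Knative Service, scaled on demand
```

Because the subscriber is a Knative Service, it scales with the event volume and back down to zero when the queue is quiet, which is exactly the serverless behavior the talk argues for.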
But to summarize and conclude: if you think of the different actors in a running system, whether it's Kubernetes or Knative, you can think of the developers, most of us here; the operators, probably some of us who are operating the system; and then the users. So what are the benefits, as a summary? For operators, it's more secure, because you can deploy it with Guard and add the rules that you need for the services you know you're going to operate. You don't have to deal with scaling; it's going to do it for you, and hopefully optimize. And it's going to help you reduce costs and reduce energy. So that's the advantage for operators. Whether you're operating at a large scale or a small scale, you'll get those benefits. For an end user, it definitely will help you scale nicely, maybe reduce cost. And certainly, it's going to be greener, so if that matters to you, this is something you can mention. And finally, for developers, it's everything that David mentioned. Using, for instance, the CLI, it's super easy to start training your developers to use Knative. You can do blue-green and canary deployments in a breeze. You don't have to deal with the Kubernetes objects underneath, because it's doing it for you. You can scale, obviously, a lot better. And you can add a security layer that comes built in. So with that, let me stop and invite David back so we can take your questions. So, any questions? I think you may have to yell because, yes. Yeah, go ahead. I don't know if there's a mic, so let's repeat the question. So I think you asked about comparing Dapr with Knative. Am I familiar with Dapr? Is it another serverless environment for Kubernetes? OK, I see. No, I'm not familiar with it. Yeah, I don't think we're going to make that comparison here. Any other questions? Yes. I'll repeat it. So, good: from a developer environment, the Guard rules that were learned, into production, like how does that flow? So the question is about the Guard rules. 
Can you put them into CI/CD, for example? And the answer is yes. Guard learned the rules; they are placed in a CRD object. So you can take that CRD object and run it through your CI/CD. You can read them from the cloud, you can write them to the cloud. You can tell Guard whether it needs to continue learning or stop learning, whether it needs to use the learned rules or configured rules. So you have a lot of control there that you can use to do that. Another question? Yes. So the question is: how do we detect the unused pods? This is for the autoscaling? Yeah, so the autoscaling went through a lot of revisions, and the algorithm is pretty complex. But there is a part of the code that's actually monitoring the usage, and it's using that to compute it. Yeah, pretty much. It's not heavily configurable, but there are some parameters that are exposed, so you can tune it a little bit. The thing that we did really well, I think, in Knative is that we've had a lot of test beds. So whatever you get out of Knative tends to be pretty good in how it scales, meaning you could run it for very large systems and it will scale really well. But you can actually tune that, too, if you need.
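As an illustration of moving the learned rules through CI/CD: in the Knative security-guard project the rule set lives in a custom resource called Guardian. The exact resource group and workflow below are an assumption based on that project and may differ by version:

```shell
# Export the learned rule set for a service (resource name is an assumption
# based on the knative-sandbox security-guard project)
kubectl get guardians.guard.security.knative.dev hello -o yaml > hello-guardian.yaml

# Review it, version it in Git, then apply it in another environment
# as part of your CI/CD pipeline
kubectl apply -f hello-guardian.yaml
```

Treating the Guardian object as just another piece of declarative config is what lets it ride the same GitOps flow as the service itself.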