Okay, so when they asked me if I wanted to do the last talk, of course it put a little bit of pressure on me, so hopefully you come out of this with a couple of things that are interesting to you. The other thing is that Evan had also submitted a talk about the future of serverless, so it was important for us not to overlap — and I think we don't. Actually, his talk, I think, is probably better than this one, but I hope this will leave you with at least one point of inspiration. If you want to understand the details, the technology, and so on, I really encourage you to re-watch Evan's talk; it was the first talk of the conference.

So let's get to it. Obviously, let's think Red Hat. It is part of IBM, but we really treat them as separate companies — I still don't have a way to look up Nima's email; I don't know if that's an indication of something else, but they're definitely separate companies. So I'm not thinking IBM, I'm thinking Red Hat.

So the statement here is: the future is serverless. And I think a lot of you here are probably bought into this, because otherwise, why would you be here? Or at least you're trying to learn. But maybe the question is: is it the future of cloud computing? And the other, more important question, which I'm not asking but hope to answer, is why. When you talk to a customer, why do you think serverless is the key? Why is it the future? That's the point I'm going to try to address. I'm making a lot of assumptions, so feel free to ask questions at the end, and also to complain or challenge me.

So with that, let me ask you a question. Since I'm a photographer, I take this very seriously: this is a real photo of San Francisco, and there are three obvious landmarks in it. This is Coit Tower, this is the famous Transamerica Pyramid, and this is a new building with lots of nicknames, but you can call it the Salesforce Tower.
So, a question for you, if you've been to San Francisco or you live there — let me kill this — which one of these three is the tallest? Anybody? Okay, somebody says it's the Transamerica. Anybody else? All right, Moo says the Salesforce Tower. The reality is that the Salesforce Tower is 1,000 feet, the Transamerica is only 850 feet, and Coit Tower, you can see, is 210. And this is a real photo — I did not fake it. The reason it appears like this, if you're into photography, is that there's a little bit of compression, a little bit of perspective: I happened to be in the one spot in San Francisco where all three of these appear almost next to each other, and with a long lens I can make it look like that. If you know a little bit, you can also change the composition. The reason I wanted to show this is because a lot of the time, it's about perspective. Always, always about perspective. So keep that in mind.

Here's another picture. It's not going to make some of my friends happy — I just saw my friend Nima in the back, who just joined. The reality is that when you leave instances running — and not just on AWS, by the way, in any cloud — you're spending a lot of money. I don't know if you're going to end up poor like him, but you're definitely spending money.

The next thing is that I did a search on Google before coming here: what is the impact of energy consumption on the environment, and would reducing energy consumption help? Just trust me, it is definitely true: if you can reduce your energy consumption, it helps. Obviously, if the energy you save was coming from carbon fuels, it's even better, and so on. But overall — as a people, as a species, as a planet — if we reduce energy consumption, it should help. So keep that in mind too.

So let's jump in. The talk is structured in three parts, with two sub-parts each, so six:
motivation with some historical context, the vision and use cases, and then challenges and solutions. But again, as I mentioned, we can't really cover everything, because it's 20 minutes, and people like Evan did a way better job than me of covering some of these aspects, so I tried to only cover the parts that he did not cover.

Sorry — I started with a picture. This picture is actually not from the IBM building where I work, and of course, if you understand the colors, then you understand why I put it up. The other thing is that it was taken with an iPhone 14, so you don't really need fancy cameras to take decent pictures.

So the motivation, I think, is that cloud computing is at the center of everything we're doing — mobile, edge; we saw lots of really good talks about this. Cloud computing right now is growing at 15%, and that's probably part of why we're here, since Kubernetes is the underpinning of a lot of cloud computing in these communities. IT spending is now over 50% on cloud computing, so businesses like Amazon are growing like crazy, and of course other companies as well. So that's important.

But here's another part that's important too: cloud computing is consuming about 1 to 1.5% of global energy. That is real, and it's only going to grow. This figure is actually a little bit old, because these numbers are only computed two or three years after the fact, so it's going to keep growing. If we continue at this pace, what's the impact on the environment, and what could we do?

For me, one of the important things is to put this in historical context. If you look at the evolution of computing, from the mainframe to where we are now, it's always been about increasing capacity and computation while at the same time reducing cost. So what can we do to continue this trend, and what are the new trends as well?
So here's a picture of the Valley of the Gods — no, not the Valley of the Gods, the Garden of the Gods, exactly. It's in Colorado Springs, beautiful obviously, and it gives you this view of where things are.

So let's try to paint a vision. I've already sold you — or you're here because you understand serverless, certainly with Knative as one example, built on top of Kubernetes. But all of this, while true, still isn't mainstream. We're not there yet: most of the applications running today are probably not using Knative; if they're using Kubernetes, they use Kubernetes straight up. So what's the challenge? What could we motivate them with? What could we do? I don't have all the answers, but I think it's important for us to ask the question. And towards the end, I will also have a slide on the threats to everything I'm saying, where potentially none of it is true at all — so let's be careful.

So what could we do? Can we actually get to a point where we're more mainstream? I think the first thing is managing costs. If anybody says this is not true, let me just pull my BS card: for anything you're doing, if I told you I would save you 10%, you would raise your hand. Very few people do not want to spend less money. So it's very important that we sell serverless that way, in my opinion. And the reason we can do this is that serverless can actually help with costs both for users and for cloud providers. As a cloud user, you're only paying for what you're using — ideally, it's optimal, meaning you use only the amount of resources you're paying for. And as a cloud provider, you don't have to provision lots of resources. You don't need to over-provision; you can provision just enough to meet your current users' demand.
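As a rough illustration of the pay-for-what-you-use argument, here is a back-of-the-envelope comparison in Python. Every number in it (the VM price, the per-GB-second rate, the request volume) is made up for illustration and is not any provider's actual pricing.

```python
# Hypothetical numbers: compare an always-on instance provisioned for
# peak load against pay-per-use billing for a mostly-idle workload.

SECONDS_PER_MONTH = 30 * 24 * 3600

def always_on_cost(price_per_hour: float, instances: int) -> float:
    """Cost of keeping `instances` VMs running for a whole month."""
    return price_per_hour * instances * (SECONDS_PER_MONTH / 3600)

def pay_per_use_cost(price_per_gb_second: float, mem_gb: float,
                     requests: int, seconds_per_request: float) -> float:
    """Cost when you are billed only for the seconds your code runs."""
    return price_per_gb_second * mem_gb * requests * seconds_per_request

# One small VM at $0.04/hour versus one million 200 ms invocations of a
# 512 MB function at $0.0000166667 per GB-second (illustrative rates).
vm = always_on_cost(0.04, instances=1)
fn = pay_per_use_cost(0.0000166667, mem_gb=0.5,
                      requests=1_000_000, seconds_per_request=0.2)

print(f"always-on VM: ${vm:.2f}/month, serverless: ${fn:.2f}/month")
```

With these (invented) numbers the serverless bill is a fraction of the always-on bill; the gap shrinks as the workload approaches constant saturation, which is exactly why the pitch is strongest for bursty traffic.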
Now, obviously, this requires you to do something fancy, like the defragmentation we're showing here. Look at this picture: say this is a rack with lots of nodes, where the black nodes are hot, meaning they actually have things running, and the white nodes are cold. An optimal allocation would be this one, where the hot workloads are consolidated onto a few machines and every other machine is not being used. What you really need is to overcommit and provide only a statistical guarantee: you don't need all those extra resources running, you only need the minimum. Of course, it requires you to do some work, but that's what it is.

Now, you could say: well, how much am I paying when I keep stuff running? A lot. I don't have the slide here — actually, there is a slide later where I show the amount the cloud is spending on energy, and it keeps on increasing. When you're managing a cloud, energy is where the most cost goes. So as a cloud provider, you'll save money, and as a user, ideally, it will be optimal.

Sorry — what about the environmental impact? For me, this is maybe the thing I would ask you all to be thinking about: by simply reducing the amount of energy that cloud computing uses, we can actually help the environment. We can get to a point where we run our applications knowing that it's saving us money because it's using fewer resources, which in the end costs less. And I didn't put this here, but there is a series of efforts around making Kubernetes more sustainable, and that's exactly the idea: to get to a point where we're not over-provisioning for things we don't need and not over-using energy — we're just using what we need. And I think serverless is perfectly placed to drive this mission even more.
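The "defragmentation" described above can be sketched as a toy bin-packing problem: pack workload CPU demands onto as few nodes as possible so the remaining nodes can go cold. This is a minimal first-fit-decreasing sketch with invented demands in millicores, not how any real scheduler works.

```python
# Toy consolidation: pack workloads (CPU demand in millicores) onto as
# few 1000m nodes as possible, so unused nodes can be powered down.
# First-fit decreasing is a classic bin-packing heuristic; production
# schedulers also weigh memory, affinity, and statistical overcommit.

def pack_first_fit(demands_mcpu, node_capacity=1000):
    """Return a list of nodes, each a list of workload demands."""
    nodes = []
    for d in sorted(demands_mcpu, reverse=True):
        for node in nodes:
            if sum(node) + d <= node_capacity:
                node.append(d)  # fits on an already-hot node
                break
        else:
            nodes.append([d])   # no node had room: power on a new one
    return nodes

# 16 workloads, each using 10-30% of a node. Spread naively, they could
# keep 16 nodes warm; packed, only a handful need to stay hot.
demands = [100, 300, 200, 150, 250, 100, 300, 200,
           100, 300, 200, 150, 250, 100, 300, 200]
nodes = pack_first_fit(demands)
print(f"{len(demands)} workloads fit on {len(nodes)} nodes")
```

Here the 16 workloads land on 4 nodes instead of 16; the other 12 can be turned off or resold, which is where both the provider's cost saving and the energy saving come from.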
This is the slide that shows how much energy we're spending on the cloud over time, and you can see it keeps on increasing — this is, I think, in terawatt-hours.

So what are some new use cases we could move serverless towards? Because if we're going to make it mainstream, it has to go beyond simple request-response — right now we're selling serverless for those things, but people are not necessarily using it. There are two use cases I want to highlight.

The first one is data and AI. One of the previous talks discussed using Kubeflow and KFServing to do model serving, but in a serverless fashion.

The second one is quantum computing. We won't go into too much detail, but suffice it to say that one of the things we're doing at IBM — it's going to sound a little bit like a sales pitch — is an effort towards quantum serverless. The simplest way to think of it: these computers — that's a picture of one — use a lot of energy, but they solve some problems orders of magnitude faster than any classical computer. In order to use them, though, you have to prepare everything: you have to set it up, you have to create the circuit that is actually going to execute on the quantum computer. All of that can be done in a serverless fashion, and that's what we mean by quantum serverless. Here's a better picture of this: as you run your job on a quantum computer, the classical pieces — for instance, creating the circuit out of multiple fragments, putting it together, dealing with error correction — will all be done in a serverless fashion on classical computers. And this is the quote from the Journal of Applied Physics.
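To make the shape of that pipeline concrete, here is a rough sketch: classical steps (assembling circuit fragments, simplifying the circuit, post-processing results) that could each run as short-lived serverless functions around a single quantum execution. All of the function names, the gate strings, and the fake measurement counts are hypothetical; this is not the API of any real quantum SDK.

```python
# A rough model of the "quantum serverless" idea: the quantum computer
# is busy only for run_on_qpu, while every classical step around it can
# run as a scale-to-zero function. Everything here is illustrative.

def assemble(fragments):
    """Classical step: stitch sub-circuits into one flat gate list."""
    return [gate for frag in fragments for gate in frag]

def simplify(circuit):
    """Classical step: drop adjacent self-cancelling X gates."""
    out = []
    for gate in circuit:
        if out and out[-1] == gate and gate.startswith("x"):
            out.pop()          # two identical X gates cancel out
        else:
            out.append(gate)
    return out

def run_on_qpu(circuit):
    """Stand-in for the actual quantum execution (fake counts)."""
    return {"00": 490, "11": 510}

def postprocess(counts):
    """Classical step: turn raw shot counts into probabilities."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

fragments = [["h q0", "x q1", "x q1"], ["cx q0 q1", "measure"]]
result = postprocess(run_on_qpu(simplify(assemble(fragments))))
print(result)
```

The point of the sketch is the division of labor: only one step needs the expensive, energy-hungry machine, so everything else is a natural fit for functions that spin up, do their piece, and disappear.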
So if you're interested in this paper, let me know — I have the PDF and can share it with you.

What are the challenges? This is a picture of Bixby Bridge in Big Sur. The reason I show it is that it's not only beautiful — think about the engineering challenge of building it, and I believe they built it in the 1930s. So we can do hard things.

The first challenge is optimization, because if you're going to build stuff with serverless, it has to appear to run just as fast. One of the problems we have right now is that the cold start is orders of magnitude longer than it should be: it should be around ten milliseconds, and it's in the hundreds of milliseconds. My colleague Paul Schweiger, in the back, has a blog post where he breaks down where the cold-start time is being spent. We've got to be able to reduce this.

Another part is design: how do you take your application and design it for serverless? When you do this, you're better able to achieve the benefits of serverless. The details were covered in earlier talks, but basically it's about understanding how the function model works: how to decompose your application, how to exchange events between the pieces, and potentially how to do workflows.

Security is another aspect: now that you have multiple pieces, how do you secure them and keep them secure? We have work going on there too. I'm moving a little fast because I don't have a lot of time left.

This final picture is: can we achieve this nirvana? Can we get to the point where serverless is actually helping us? This is a picture of the Painted Ladies in San Francisco, I think — a nice view of the city, and obviously everybody's in love, right?
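The cold-start problem above can be illustrated with a toy simulation: the first request to a scaled-to-zero function pays an initialization penalty, while later requests hit a warm instance. The 200 ms startup figure below is simulated, not a measurement of any real platform.

```python
# Toy cold-start illustration: the first invocation of a scaled-to-zero
# function pays for startup (image pull, runtime boot); later requests
# reuse the warm instance. The startup delay is simulated with a sleep.

import time

class FunctionInstance:
    def __init__(self, startup_s=0.2):
        self.startup_s = startup_s
        self.warm = False

    def invoke(self, payload):
        if not self.warm:
            time.sleep(self.startup_s)  # simulate the cold-start penalty
            self.warm = True
        return f"handled {payload}"

fn = FunctionInstance()

t0 = time.perf_counter()
fn.invoke("req-1")                      # cold: pays the startup cost
cold = time.perf_counter() - t0

t0 = time.perf_counter()
fn.invoke("req-2")                      # warm: runs immediately
warm = time.perf_counter() - t0

print(f"cold: {cold*1000:.0f} ms, warm: {warm*1000:.3f} ms")
```

Techniques like container freezing and pre-warmed pools, mentioned later in the talk, all amount to making the first branch of `invoke` cheaper or avoiding it entirely.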
And some of the things we're doing to address the problems we just talked about are, for instance, freezing containers, faster probing in the underlying Kubernetes layers, and Security-Guard, a new project that got started to help with security at runtime. We're also trying to get to better standards: I'm part of the OpenCost group, where we're trying to make sure that costs across clouds can be compared, and the Kubernetes sustainability efforts are part of this too. And in terms of making the flow between these different serverless components a little bit easier, there are things like Tekton, obviously, and the Serverless Workflow specification.

But what are the threats to all of this? So Nima just walked in — he sent me this tweet a few days ago, by a guy who says: I spoke to a team yesterday that is moving away from serverless. The reasons he mentioned, which I've partly summarized here, are that a monolith is easier to build, easier to test, easier to manage, easier to evolve, and easier to secure. We know how to do it — everybody knows how to do it, it's a big pile of code, and we can do it. And of course, Kubernetes is also hard, which makes Knative and serverless even harder. And developer time is expensive, whereas we can just waste resources.

I hope you'll agree that these threats, while real, are not signs of fundamental issues with the way we're doing serverless. They're more about the reality of teams: what background you have, and potentially the maturity level of your organization. So none of them should prevent you from trying serverless. But they are real, they exist, and you need to be aware of them.
So just to summarize, I want you to leave here thinking that when you sell serverless, or talk about serverless to your team members or your management, there are really two main advantages we have to be clear on. First, it achieves better cost: the provider will be able to reduce their costs, and as a user, you'll be able to use resources more cheaply. Second, it's potentially very good for the environment. Why would you not want that, and why would you not want to advertise it as well? There are lots of challenges, but we can solve them, and we're actually working on them.

So with that, let me mention that the goal is to achieve a serverless-first world, right? How do we get there? The last thing I want to do is show you three pictures, because in case you don't believe in the impact on the environment and how important it is to us, look at these three pictures from the Bay Area. This is one, this is another one, and this is another one. The reason I show these is that, obviously, there is an orange hue in the Bay Area that happens at night. But one of these three pictures is not mine, and one of them is from when we had the massive fires in 2020 — it's this one. You would think it's fake, but it's real. It's happening, and it's impacting all of us. So even as computer scientists, we should be working towards helping the environment. I really think that as a serverless community, we need to be thinking not only about costs, but also about how we help the environment.

So with that, I want to thank you, thank the community and Red Hat, and let you know how to reach me. Thank you.