So, I'm Mark Peek, principal engineer with VMware, and my co-presenter is Doug Davis from IBM. We're going to be going over the state of serverless. Before we start, let's get some hands going up. How many people know what serverless is? All right. How many people know what functions as a service is? All right. How many people know the difference between those two? Ah, okay.

So what we're going to do is talk a little bit about what serverless is, why and when to use it, and some use cases; then Doug is going to go over the CNCF Serverless Working Group and some of the future work that group is planning.

Before we start in on serverless itself, let's first talk about functions as a service. One of the things we've seen is the progression of compute: it started with bare-metal servers and went to virtual machines. Obviously, me coming from VMware, that was an important thing. In fact, I think it's pretty funny that IBM and VMware are the ones talking about serverless, but I digress. As we moved from virtual machines to public cloud, containers took us by storm a couple of years ago, and it's a wonderful thing.
But as people have looked at how they want to write their programs and how they want to consume compute, they've thought: I really just want to focus on my business logic; I want to write my function and I just want it to execute. That's one of the reasons functions as a service is really interesting to people: you just write the code and submit it, the infrastructure is already there, and you get wonderfully fast startup times. When you look at density packing, you can pack more functions into the same amount of compute and schedule them across a lot of different nodes. The other part of it is that, as you look at functions as a service as a way to break up your monolith, you can now write functions that are very directed at the specific API usage you need.

So, going along with this, what is it that we want to see in a function? Well, it's event-driven: an event comes in, an action occurs, and that action runs the function. The function may do some back-end work, it may send an event back out, but it definitely does some work there. It's also very short duration: the current cloud providers enforce fast execution times (I think right now AWS is around five minutes), so it's not intended for long-running workloads. You might be better off using containers or VM instances for those types of things. But as we see some of the open-source projects coming along, I can definitely see this as one area where they can provide additional value: you can choose your own duration limit, increase it, and control your own destiny there.

It's definitely stateless: all the runtime is doing is providing compute, and you don't want to rely on any storage associated with that function runtime. You rely on external services to store state in back-end services, or you store state by sending another event back out to additional functions, or to trigger other events. I'd also say one of the key areas is autoscaling: as events come in, the infrastructure responds to that workload and scales out the number of function instances needed to handle the ingest of those events. And then public cloud is talking a lot about the lower cost: you only pay for what you use. When a function runs, that time is what gets billed; that's your bill. This dovetails into the broader serverless idea, but there's definitely a cost savings in public cloud when you only pay for what's actually running.

So what does a function look like? Here's a very simple example: it's JavaScript, with parameters coming in, and it's triggered by an event. Let's just say it's an HTTP event.
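The hello-world function being described might look roughly like the following. This is a hedged sketch in an OpenWhisk-like style; the exact signature and return shape vary by platform, and the `main`/`params` convention here is just one common pattern, not necessarily the one shown on the slide:

```javascript
// A minimal event-driven function: it receives the parameters of the
// triggering event (say, an HTTP request) and returns a JSON payload.
// The main(params) convention is OpenWhisk-style; other platforms differ.
function main(params) {
  // Use the "name" parameter if the caller supplied one, else a default.
  const name = params && params.name ? params.name : "World";
  // The returned object is serialized as the JSON response.
  return { payload: "Hello, " + name + "!" };
}
```

Invoked with a `name` parameter it greets that name; with no parameters it falls back to the hello-world case.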
There are parameters associated with that call, and one of them is a name. We look at the name and say "Hello, World", or "Hello, name" if a name was provided. Very simple. The JSON payload is sent back out as the response. So when you're looking at how functions can work for redoing your APIs, this is a good, simple example of that. And it really is this concept of being able to control the actions running through your infrastructure and to chain those functions together.

So what does this look like overall? You have event sources coming in from a variety of different areas: computers, web browsers, IoT devices, phones, all kinds of different devices contacting your infrastructure. Actions occur (a lot of people will say actions, triggers, events, but it's all somewhat the same thing). The event then goes into the function executor and scheduler, and it will be scheduled onto, more than likely, a container where the function runs, though there are other implementations that might use something other than a container abstraction. The platform then provides the compute, network, and storage for the function execution. More than likely, the function will use back-end services.
So, things like event buses, blob stores, and other databases, in order to query or store information. It doesn't necessarily need those back-end services; it can be standalone, do some compute, and send the answer back out as an event. But it is this ecosystem that we're seeing. One way to think about it is that as you're developing your overall cloud-native application, you should look at these components as design patterns you use to create that application: you combine VMs and instances, containers, functions as a service, and the services themselves to create your cloud-native application. And I think the important thing is that as the whole serverless and functions-as-a-service space expands, what we want to see is more portability: how can I write my function and then deploy it onto many other clouds? That's going to be kind of the nirvana going forward.

So now we're going to take a step back to serverless. The talk is about the state of serverless, and I've been talking about functions as a service, so what is the real distinction there? I'd say this comes more from a public cloud point of view. Functions as a service is how you actually run the functions, how you execute them, and so on. Serverless, on the other hand, is the thought that I can just hand this function off to some infrastructure that someone else is maintaining and operating; I don't have to care about it. They'll figure out how to autoscale it for you, and if it's not running, it doesn't cost you anything. I think that's a very important point. But it's also in the eye of the persona: to the developer persona, it's always going to be serverless, while IT ops might be the ones providing a functions-as-a-service infrastructure. So Amazon, Google, and the other public cloud vendors have their function services; they're providing functions as a service, and we consume it as serverless. That's another way to think about the distinction between those two.

Then, as we look at all of these "as a service" offerings (functions as a service, platform as a service, containers as a service), it's all starting to blend together; Doug and I were talking about this. There are subtle distinctions between all of these different services, and it really comes down to: what's the smallest unit of code that it executes, and how do you interact with it? But I do see a blurring of lines. I've had this conversation with a lot of people; they say, "Functions as a service, isn't that a PaaS? I just kind of submit my code to it." And I'd say yes, but there's probably a finer granularity on the function you're actually submitting, and there are differences in how it gets autoscaled. A PaaS may not do the autoscaling for you; you might have more manual steps to bring the number of instances up, rather than it being done automatically. So there are very subtle distinctions there.

Another distinction I want to make is that you can decompose functions apart from your application. A lot of times you'll create your application.
You'll have your GETs, your PUTs, your different routes all in the same set of code. With functions as a service, you'll more than likely decouple those, creating individual functions for each of those routes, each of those endpoints. In that case there is a real distinction: I can now upgrade or modify an endpoint independently, without having to redeploy the entire application. I'd say that's another area to look at for the difference between serverless/FaaS and some of the platform-as-a-service and container-as-a-service offerings.

Going over some of the use cases: this is just a sample of the use cases I've seen for serverless or functions as a service. Obviously I've talked a lot about decomposing into microservices and having APIs there. IoT is a great use case: you have a lot of devices that may be calling in, you can't predict when that load is going to happen, so you really want the infrastructure to autoscale for you, and you can have very discrete functions to handle those inputs. Batch and stream processing is more about using functions to orchestrate the data flowing between all of your batch and stream stages. Then there's obviously DevOps: in your CI/CD pipelines you can have triggers fire. And for IT ops, I can imagine seeing issues occur and saying, "If I see this alert, I want to send a Slack message to this channel; if I see this other alert, maybe I'm going to call out to PagerDuty and start waking some people up."

Let me also touch on a couple of use cases I've seen. This is more of an AWS use case, but I think it's pertinent. At re:Invent 2016, Coca-Cola talked about their infrastructure: when you swipe your credit card at a Coke machine, it triggers a Lambda function that then does the charge onto your credit card. So they're only paying for compute when you're actually paying for a Coke. I think that's a really good point about why you would want to use serverless in some of your products. But I'll also say that if you go back and look at that video, they also talked about a break-even point: with a low number of transactions, it makes sense to use Lambda, but as the number of transactions increases, it actually makes more sense to move over to dedicated instances. So you really have to look at your use cases and model the costs to decide whether going with functions as a service makes sense from a cost perspective.

And then at ServerlessConf (I think that was in May) there was a talk from Nordstrom. They have a fully open-source "Hello, Retail" site on GitHub that's written all in Lambda, but of course if you dig deeper into it, the functions are there, but they're also using a ton of back-end services. And that's where you again have to look at your overall cost model: the cost to run the functions is one thing, but how much are the associated services that store the state, do the eventing, and everything else? You have to look at that entire bill as you trade off how you do infrastructure. And with that, I'm going to hand off to Doug.

All right, can you guys hear me okay? All right, cool.
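As an aside, the break-even tradeoff described above (pay-per-invocation FaaS versus a flat-rate dedicated instance) can be sketched as a toy cost model. Every price below is a made-up illustrative number, not an actual Lambda or EC2 rate:

```javascript
// Toy cost model comparing per-invocation FaaS billing against a
// flat-rate dedicated instance. All prices are hypothetical.
const COST_PER_INVOCATION = 0.000002; // $ per function run (made up)
const DEDICATED_MONTHLY = 50;         // $ per month for an instance (made up)

// Monthly FaaS bill grows linearly with invocation volume.
function faasCost(invocationsPerMonth) {
  return invocationsPerMonth * COST_PER_INVOCATION;
}

// Volume at which the dedicated instance becomes the cheaper choice.
function breakEvenInvocations() {
  return DEDICATED_MONTHLY / COST_PER_INVOCATION;
}

function cheaperOption(invocationsPerMonth) {
  return faasCost(invocationsPerMonth) < DEDICATED_MONTHLY ? "faas" : "dedicated";
}
```

With these invented rates, low-volume workloads favor per-invocation billing and high-volume workloads favor the dedicated instance, which is exactly the Coca-Cola break-even observation; the point of modeling it is to find where that crossover sits for your own numbers.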
All right, so I'm going to talk about the CNCF working group. Excuse me. Even though a lot of people's hands went up when Mark asked who knew about serverless and functions, it's still a relatively new technology. Back in June of 2017, the Technical Oversight Committee in the CNCF decided there were newer technologies out there and wanted to figure out what, if anything, the CNCF should do about this space. Obviously, the first step in that process is to find out exactly what serverless is and what's going on in the community today, and then see if there's anything we want to do in that space. So they basically said, okay, let's start up a working group and explore these various things.

One of the outputs of the working group was a whitepaper. As you might expect, its first goal was to describe what serverless is, what functions are, and what's going on in the community today, and to define some common terminology, because each platform may have a slightly different term for things. For the most part, the working group itself was actually pretty boring, because everybody was in agreement on most things and we just had to document what was out there. That was fairly straightforward, but there were a couple of maybe contentious points, some of which Mark touched on when he was summarizing. One of them was "zero cost when idle." For those of you who understand how things like Kubernetes or Docker actually work, it's very hard to get the scale of your application down to zero so that it's actually zero cost when you're not using it. The infrastructure isn't designed to work that way; in a lot of these cases you can go down to one instance and then scale up as needed, but to go beyond one, down to zero, so you actually get real zero cost, is a bit of a problem for some infrastructures. So when we tried to draw a firm line in the sand that said absolute zero cost, that was a bit of a sore point, because some infrastructure can't necessarily get there. I'll talk in a minute about how we solved that.

Then we had this whole public-versus-private thing, which again Mark touched on, because serverless is supposed to be: you hand us your function, we'll host it, and you don't have to worry about anything else. That's great if you're in a public cloud. But in a private cloud, obviously you're not handing the infrastructure off to someone outside the company to manage; you're handing it off to someone else inside your company, like an IT shop. So there was a bit of a contentious point there, because some people said, "You can't have serverless in a private cloud; it doesn't make any sense, because you as the organization still have to own it, manage it, and pay for it, so it can't possibly be serverless."

To solve these, we basically did what Mark hinted at, which is define personas, or roles. We said: okay, you need to look at this problem from two different perspectives. There's the provider, who's providing you the functions-as-a-service infrastructure, versus the developer, who just wants to hand over their code. Once you look at it from that perspective, you can say, okay, then it does make sense.
Maybe I can do this in a private cloud, because my IT shop will be the ones who absorb the cost; from the developer's perspective, they don't have to worry about the cost anymore, or about hosting and managing it. So you can do this split within your organization and still get serverless, but it's only serverless from the developer's perspective, not from the IT shop's or provider's perspective. That led to the notion of "serverless" versus "serverless technology": serverless is what the developer sees; serverless technology is what the provider or the IT shop manages. That's how we danced that fine line, so as not to upset people who didn't want to be excluded from the serverless label even though they didn't necessarily fit some other people's perceived definition of serverless.

Once you get past the definition of what functions and serverless are, as I said, most of it went fairly smoothly. We highlighted all the things Mark covered in his part of the presentation: the use cases, areas where it's proven value, how you differentiate serverless from PaaS, from containers as a service, or even from VMs. All of those things are talked about in there. We also went into a little bit of detail on how serverless works from a technology perspective, under the covers. We didn't, for example, take Lambda or OpenWhisk and say, "Here's how this one does it"; rather, we looked at it in an abstract sense and asked: what are the general things you'll see in a functions-as-a-service or serverless infrastructure? That way, as you look through the various options out there, you're not surprised by some of the terms or concepts that are presented. And finally, in the whitepaper, we talked about what the CNCF should do going forward with serverless.
Okay, and I'll talk a little more about that in a second.

All right, so the other output we had, aside from the whitepaper, was basically a spreadsheet of the serverless landscape: what are the cloud providers' serverless offerings and the open-source projects out there? What development tools are available for serverless or functions, and what kind of back-end services are people making use of today? This isn't meant to be a list of things that are approved, or that get a gold star, or that we like or dislike; it's just what's out there today. So if there's something missing from the spreadsheet that you'd like to get added, it's a Google doc right now; just add it as a comment and it'll get added. We're not there to draw a bar or a line in the sand; we allow anybody in. It's just so people can go to one particular place, see what's out there today, and go play with it.

All right, now to the recommendations themselves. Obviously, we need to maintain that spreadsheet going forward, as I just said, so feel free to suggest edits there. But going forward, we want to try to enable developer interop and portability, and to do that, you need to make sure you can move your function from one platform to the next. Now, there are many different aspects to portability of functions, and rather than trying to boil the ocean all at once, we decided to focus first on events. Can we get a harmonized, agreed-upon, interoperable format for these events? That might make it a little easier to make your functions portable. Not 100% portable, but a step in the right direction. The other things we talked about were the function definition, its packaging, and how you actually deploy it; that's all goodness too, and hopefully it will eventually form the complete picture for moving your functions around. But the event format was the first one we decided to tackle, and I'll talk a little more about that in a second.

And then finally, any additional documentation we can think of going forward: are there documents we could put forward to help people understand more about the community, more about what's going on in this space, and the integration with other CNCF projects? Just additional things to help out the community going forward. Those are things we're going to look at producing in the future.

So let's talk a little more about events. As I said, we're basically looking at trying to find a common sort of envelope for the events that come into the system. We're not necessarily going to mandate that all events, for example, must be in a JSON format or anything like that, because that's going to be difficult: we have a lot of existing systems that send out events in particular formats, some in JSON, some in YAML, some binary, some XML, whatever. We're not necessarily going to try to force those people to change. But if we can perhaps look at the wrapping envelope around the event that goes into the function, maybe we can get a little interoperability there. This is actually very similar to what we see happening with CNI, CSI, and other projects within the CNCF: getting that kind of harmonization at the interface level. And it's important to note that events aren't just for functions or serverless; we actually see events flowing into all types of applications, so we're hoping this will get leveraged by those other use cases as well, not just functions.

Now, as a starting point, we had several different options available to us, and I crossed out the two at the bottom because just yesterday we had a face-to-face meeting to figure out how we're going to start: do we start with a clean sheet of paper, or take an existing specification? We landed on the OpenEvents specification, which was originally started by Austen from Serverless, Inc., who took the initiative to go out and talk to a bunch of different companies to see if there was some commonality he could formalize in document form, and he came up with OpenEvents. We thought, okay, that's a good starting point, so that's what we're going to start with. That's not to say we won't look at, for example, CADF or the cloud-native event mapping document and use those as input, but in terms of a starting point, we have to start someplace, and we're going to start with OpenEvents.

This is open to everybody, so if you're interested in looking at some of this harmonization or interoperability around events, please come join us. It's a working group open to everybody; you don't have to be a member of the CNCF to join in the fun, and we welcome you to join us. Here are some important links for you: links to the working group itself (we have weekly calls, right now at 8 a.m. Pacific time), and links to the whitepaper and that landscape Google doc if you're interested. And I believe that's basically it. So, thank you. We do have other serverless tracks in this very room right after this one, summarized here, so if you're interested in serverless, please stick around for those. And with that, I believe we have about ten minutes for questions, if people have any.

Of course; I believe the PDF is available. Mark, you uploaded it? Yes, the PDF is available from our little page on Sched, or wherever. (Oh, I'm sorry, the calls are weekly on Thursdays; I'll fix that.)

Yeah, so I'll take a little bit of that. Let me repeat the question, since you couldn't hear it: the question is, is there basically a performance impact
from dealing with these small bits of functions as opposed to an entire application, and spinning them up and down? For example, going down to zero instances, the cost of spinning up that very first instance could be high. I think that is a problem, and depending on the infrastructure, they deal with it in different ways. For example, some of the cloud providers will let you set up basically a ping that hits your function every now and then, so you always have at least one instance running, at least for a period of time; then maybe after 30 minutes to an hour, even that will drop down to zero. So there will be a performance hit for the first request to come back up, but at least for those 30 minutes or so you shouldn't see a performance hit. Other infrastructures try to deal with it a different way: OpenWhisk, for example, will actually pause your container so that the startup time is very, very fast; it's not starting from a cold boot, in essence, and that's how they address it. So there are lots of different ways people are trying to work on that issue, but it is definitely something to think about. Anything to add? You want to switch back to the schedule? There we go. Yeah, in case you were expecting a demo from us: we weren't going to give you a demo, but if you do want to see functions as a service in action, there are quite a few open-source projects up here that I'm sure will be giving some really good demos.

Other questions? So, you talked a lot about the orchestration and how you're trying to get the definition for the events and such coming into the function. Are you involved with other groups? I'm thinking of metrics, tracing, and all that work that could come into play and have an impact. Yes, we're trying to get some of those groups involved. Is it OpenMetrics? Is that the right name?
I can't remember. Mark, is that right? Yeah. We're trying to get people like OpenMetrics involved in the working group as well. So if there's something, or somebody, you're aware of that we're not talking to as you look at the working group, please let us know, because we want to reach out to as many people as possible.

Yes? So I wanted to dig a little more into this startup issue, because it seems to me there are two possibilities. Either you're coming from a totally cold start, and spinning up has to take at least a few hundred milliseconds, right? Or you're already running, in which case, what is the advantage of serverless at all, because you're not really serverless if it was already running? So how can you ever win?

I think I'll answer that with: you can do predictive analysis. As you start seeing more and more events coming in, you can start spinning up more and more instances in anticipation of the load, and then scale back down.

Yeah, and to add to that: when I first came across serverless, I thought of serverless versus PaaS as very, very similar. You give them all your code, they'll manage it for you; that's great. But look at the first indented bullet there. When someone started talking about scaling your GETs, as opposed to your entire application, that was the killer scenario for me. That's cool, because I can now scale up a GET, which is relatively lightweight, talks to a back-end system, and returns data very quickly; I could scale that out to a million instances, but have maybe only a hundred instances of my update path. That means I can scale the read path up faster, and have more of it, than if I had to scale my entire application. So it's all about better resource utilization. At a very high conceptual level I do tend to agree with you, but it does require a decomposed application. That, to me, is one of the big points of it.
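To make that decompose-and-scale-independently point concrete, here is a hypothetical sketch: two routes of one application split into separately deployed functions. The names are invented for illustration, and the shared `store` map stands in for the external back-end service a real deployment would use, since the functions themselves are meant to be stateless:

```javascript
// Hypothetical decomposition: each route becomes its own independently
// deployed (and independently scaled) function instead of one app.
// In a real FaaS deployment this shared state would live in a back-end
// store; a Map is used here only to keep the sketch self-contained.
const store = new Map([["widget-1", { id: "widget-1", stock: 3 }]]);

// GET /items/:id -- lightweight read path; could scale to many instances.
function getItem(params) {
  const item = store.get(params.id);
  return item ? { status: 200, body: item } : { status: 404 };
}

// PUT /items/:id -- heavier write path; scaled separately from reads.
function putItem(params) {
  store.set(params.id, { id: params.id, stock: params.stock });
  return { status: 200, body: store.get(params.id) };
}
```

Because each function is deployed on its own, the platform can run many instances of the read path under read-heavy load while the write path stays at a handful, and either endpoint can be upgraded without redeploying the other.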
Yes, a question about security. A focus of this KubeCon has been service mesh, which is about authorizing and securing the traffic between microservices. Are there projects in the serverless and function space that are trying to secure traffic and authenticate functions to each other?

I'm not aware of anything specific for serverless. Mark? But I would assume that in most cases we're dealing with containers running the functions anyway, so Istio can treat your function container the exact same way, because to the infrastructure it's still just a container. You can put that sidecar in there to get the Istio benefits, whether it's a function or a full-fledged application; I can't imagine it being any different.

I'd say with respect to the public cloud, they definitely have things like IAM for AWS to provide some level of authentication and so on. On the open-source projects, I haven't quite seen that yet, but I think that's going to be kind of the next wave, and I know of some projects that are looking at how you do better multi-tenancy and other security across your entire FaaS implementation. So I'd say stay tuned. We keep using this term, but it's early days, and I think it's going to be moving very fast.

I'm always curious how technologies develop and where they come from. Was this somebody working on a PaaS who said there's a certain use case they wanted to fine-tune, and as they got going, said, "Wow, this is actually something we can cleave off and create as a separate model of compute"? Or was it somebody's PhD paper that they then realized had a real-world application and brought back?

I actually can't answer that one. I'm going to try to guess. Mark, do you know?

I'm not sure, but the earliest instance of serverless that I know of came from Iron.io, quite some years ago, and I think they were seeing the progression from a PaaS to a CaaS to just being able to submit a function. That's the only history I've been able to dig out of it, and then obviously AWS came out with Lambda, which does something similar.

Yeah. When we think about the cost savings of serverless, I guess it's great; we think about infrastructure. But the real benefit that I've seen, or heard talked about, is really the benefit to the development workflow and how that's optimized. The biggest cost of any application, in my experience, is actually the time to build these things, and if you can optimize that, that's really great. What I'm curious about, my question, is: where is the state of things like debugging, tracing, incremental rollout, and staging versus production environments, and where does the CNCF feel its boundaries are in touching those areas?

You want to take that one, Mark? Because to me it's all just containers, right? So I would assume all the same tooling for logging and tracing still applies here in the same way, I would hope, and that most of the differences are in the tooling you use to actually develop your application or function, and the tools to deploy it. Beyond that, at the infrastructure level, I would hope most of it should be fairly common.

In my opinion, I think that's kind of a greenfield opportunity right now: to have things that make it easier to debug, to actually step through the code, to understand what the failure modes could be. Even unit testing those functions, which are written assuming certain infrastructure, is challenging, I think. So I'd definitely like to see people think about that and come up with good solutions. I think there was another question; yes, the gentleman back there.
I had my hand up first. I don't know if you could quickly go back to the slide about serverless versus functions; I think it was earlier. The way you described it, it sounded like you thought of serverless as functions plus additional functionality. (It was back a little more... that one, yes, okay.) So, was I correct in interpreting this as: you think of serverless as functions plus this not-worrying-about-the-server property? Because I would think of serverless as more general than just functions. If you look at something like ACI or Fargate, that's kind of a serverless container product, and with App Engine you get kind of a serverless source-to-deployment system. So I was just curious about your view: do you think of serverless as purely a function system with that serverless property, or do you think of serverless more generally?

My answer is that serverless could be things other than functions as a service, because really, when you look at it, it's more about the managed service: backend as a service, or any of the services you're using along with functions as a service, could be considered serverless. And I think if you take this definition, where you're not having to pay money when it's idle and so on, it encompasses all of those types of offerings.

I think we're out of time. Oh, one more.

I was just going to amplify one thing on the serverless part. Serverless means not having to provision or think about your instances. So in that sense, a hosted database like Google's Spanner could be thought of as serverless, if you're not having to provision specific Spanner instances and it just scales when you need it.

The problem I run into, though, is that when you talk about serverless at strictly that abstract level, you get back to the question the gentleman over there asked: what's the difference between that and PaaS?
Is it really just scale-down-to-zero, or is it more than that? That's where it gets a little fuzzy. A true PaaS could be serverless, I agree, but it depends on your perspective, and those are fighting words to some people.

All right, great. Thank you; we appreciate everyone coming.