Hey guys, good afternoon. Thanks for coming. We're going to be speaking about serverless and function as a service in OpenStack. Originally we had a fourth panelist; he's not here, he has other engagements, but we'll do what we can. One quick reminder: if you have questions, please wait until the end; we don't want to break the flow. We'll have plenty of Q&A time at the end, and then we can argue, discuss, whatever we need to do. So with that, why don't we start. You want to introduce yourself? Oh, yes, very important; they asked me to introduce myself. My name is Ruben Reduz. I'm a Solutions Architect, and I work for Heptio. My name is Miles Sinhauser. I'm with IBM Cloud, part of the Blue Box Cloud acquisition a couple of years ago. My name is Tyler Britton. I work for Red Hat, obviously. You're missing your fedora. I left that at home. OK. So first and foremost, the important question: let's cover our bases and define what exactly serverless is. There are a lot of buzzwords, and a lot of people on Twitter with lots of opinions. So what is serverless, exactly? Someone else's operator crying at the console? Yeah, I think the name is terrible, in the sense that any place where you're not dealing with the operating system is "serverless." But in this case we're talking more about the function-as-a-service model, where you say: I'm giving you code, you run it, and it's generally event- or schedule-driven. Well, and don't forget about backend as a service. So you have function as a service, backend as a service, and then people trying to somehow fit software as a service in there, which doesn't make sense. But basically, functions as a service and backend as a service are what fall under the serverless umbrella right now.
So, speaking of these definitions, how is this different from, say, platform as a service, like Heroku or App Engine? I think the biggest way to split it out is between short-running processes and long-running processes. In a traditional PaaS you have, whether it's a dyno, an instance, an app, whatever you call it, something constantly running. You push it, it runs, and it sits there. It may get put to sleep, but it's just sitting there waiting for activity; it's running the whole time, generally in memory. The difference here is that it's generally totally cold. There may be some caching involved for performance reasons, but in general the code isn't running anywhere until something triggers it, whether that's an event or a schedule. He said everything. He's right. I can't argue with that. OK. So with that definition, now that we know what our opinion on the matter is: what is the most wrong thing about current serverless marketing? Like I said, I'm against the software-as-a-service people, people that have been running databases for years, trying to get into the space as backend as a service. You look at companies like Firebase, which was acquired by Google: that's absolutely, incredibly interesting technology that's perfectly applied in the serverless context. But others that are just doing hosted databases, that doesn't really fit. You have to compare what Parse was doing a couple of years back versus what's being done now with Lambda, Azure Functions, and Google Cloud Functions, and then correlate the differences between software as a service, what Parse was doing, and hosted functions now. Yeah, I think it's the same thing we saw with cloud, where it was like, well, now everything's cloud, so nothing's cloud.
You see that with serverless, where it's like: well, technically they don't manage the server, so we're going to call our thing serverless and get some more search engine optimization and sell some ads. Just like we refined cloud into IaaS and PaaS, got more specific, and started putting down specific definitions, I think that's starting to happen here. That's why you're hearing people call it function as a service or backend as a service: to be more specific about what they're talking about. It's not just a managed offering, and that's what we've seen historically, people trying to get into the space. OK, so how about the current advice going around that says: if you're going to start an app today, you should start with FaaS, or you should architect for FaaS. Would you agree or disagree with that? I would really say FaaS is just another tool in the toolbox. We always have had, and will have, mixed architectures, and finding the right tool for the right job is more challenging now than ever before, but you also get the payoff when you do find it, because these tools exist. If you look at actually using functions alongside containers for the long-running processes, or for the data-intensive processing, you still get the deployment artifacts and that ease of deployment, but you don't need or want to manage the entire VM lifecycle or bare metal. These can coexist with each other in the same architecture.
Yeah, I usually tell people: when you break down your application and walk through a workflow, if you say, "well, it waits for this, and then...", that might be a good place where you could use a function as a service. It's a cold function until something happens in the long-running process that needs it, whether that's processing an incoming piece of data or a file, and it runs for a short period of time. I think another area where we're seeing some overlap is with what traditionally would be batch jobs. "Can I do that in Lambda?" Well, Lambda can only run for five minutes, so that two-hour batch job won't run in Lambda. So there's some overlap there right now: is there a new version of batch that's more serverless-like from a management perspective? Or can you break it into another backend service, or use Lambda to essentially break up your batch sizes? You push the work onto some type of stream and then break it into functions that way. It forces you to approach the problem from a different perspective. Yeah, and I think the challenge is that you see a lot of functions being used, because they're so easy and lightweight and they don't technically cost you anything unless they're running, as these little pieces of glue all over the place, and you get function sprawl now, right?
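The "break the batch into function-sized pieces" idea above can be sketched in a few lines of Python. Everything here is illustrative (the function names and chunk size are made up); in a real system each chunk would be pushed onto a queue or stream and trigger its own short-lived invocation, rather than being called in-process:

```python
def chunk_records(records, chunk_size):
    """Yield successive slices small enough for one short-lived invocation."""
    for i in range(0, len(records), chunk_size):
        yield records[i:i + chunk_size]

def process_chunk(chunk):
    """Stand-in for one function invocation; here it just sums the chunk."""
    return sum(chunk)

def run_batch(records, chunk_size=100):
    # In a real system each chunk would go onto a queue or stream (e.g.
    # SQS or Kinesis) and fan out to independent invocations; calling them
    # in a loop here just shows the decomposition.
    return sum(process_chunk(c) for c in chunk_records(records, chunk_size))
```

Because each chunk is independent, the five-minute ceiling applies per chunk rather than to the whole job.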
You see this with Amazon: if you're not a fan of CloudWatch, and I'm not a fan of CloudWatch, you'll say, "well, I'm going to use my own logging." So you have a Lambda that takes the logs, and then you put a Lambda over here that exports them, and you end up with all these little helper functions, which can be super useful, but then how do you manage all of them? Yeah, that's the key point I keep seeing. I was at a serverless conference two weeks ago, and it was interesting having discussions with people around the fact that architecture is such a critical component of serverless systems now, more so than before. Not everybody needs to be an architect, but you have to understand the context you're building in more than you ever had to historically, compared to monoliths and even the microservices we've gone into: how you actually glue these pieces together and how everything translates and fits is critical. Yeah, in a monolith there's no concept of "I made a function call to other code and it didn't run." Something may have gone wrong during the call, and you usually have exception handling for that, but the idea that you literally call a function and nothing happens doesn't exist. It can exist in a serverless environment, because you're calling an external thing, and for whatever reason it may not run. So you have to build that into your architecture: what happens if it doesn't run? What happens if it runs too long, do I kill it? Some of that handling has to be in your architecture, because you've introduced a whole bunch of new failure modes that you need to deal with. Yeah, if you thought we were building distributed systems before, welcome: this is the new playground.
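The new failure mode described above, a remote function that simply may not run, usually gets handled with caller-side retries, timeouts, and an explicit answer for "what happens after the last failure." A minimal sketch, using only the standard library (the helper name, attempt count, and backoff values are all made up for illustration):

```python
import time

def invoke_with_retry(fn, payload, attempts=3, base_delay=0.01):
    """Call an unreliable function, retrying with exponential backoff.

    In a monolith a call either runs or raises; with a remote function it
    may simply not run, so the caller has to own retries and decide what
    to do after the final failure (e.g. park the event on a dead-letter
    queue instead of raising, as done here for simplicity).
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return fn(payload)
        except Exception as err:  # in practice: catch the SDK's error types
            last_error = err
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"invocation failed after {attempts} attempts") from last_error
```

The point is architectural, not the specific code: the retry policy and the give-up behavior become part of your design, because the platform won't decide them for you.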
All right, so now to make it more relevant, more topical to this conference: where does this fit in OpenStack? Do we want OpenStack to offer some sort of service, or have some sort of project, that handles function as a service or serverless? The short answer is no; the longer answer is no. As we heard in the keynotes yesterday and today, more projects are not more good, so I don't think it needs to be part of OpenStack, just use the core bits of OpenStack as the platform to run on. You saw Kubernetes running on OpenStack this morning; well, some of the open source serverless things run on Kubernetes, so you may have a Kubernetes cluster on your OpenStack environment, and that's where you run serverless. Or you may have something like OpenWhisk, which you can run in VMs, and you deploy that on top of OpenStack. The big advantage there is that you have the tooling to scale it: depending on what you're using, it's an OpenStack API call to add more machines, versus "how am I dealing with bare metal?" and all those other things. Yeah, and coming from an operator that deployed OpenStack on bare metal, I absolutely caution and recommend against anybody else going down that path unless you really enjoy pain. That's where I see this: you get to take advantage of OpenStack and essentially all the advances we've made in the last five years, because OpenStack today is incredible compared to, you know, OpenStack Diablo. With that in mind, you get to see OpenWhisk deployed with what's actually capable and possible, as you said, with scale-out and with management. That's interesting.
Yeah, I think that's one of the things I find interesting, whether it's serverless specifically or even some of the containers and Kubernetes people saying, "let's just run it all on bare metal." I think most people have been using virtualization and cloud and OpenStack for so long they forgot that bare metal is a challenge to manage, with a whole separate set of tooling, versus today, where you can just make an API call, spin up some machines, and kill those machines, whatever you need to do. I think that's the piece that's still super powerful from an OpenStack perspective. There's no additional value in it being a first-class OpenStack project, as long as it consumes the APIs and runs. How does that make it any better? How does that improve the experience of the user or the operator? I don't see any reason it would. Yeah, TL;DR: leave it to the ecosystem; don't actually make it part of the project. So, we spoke briefly about the serverless story where, for the backend processes, you have containers or you can have something else. I just want to see how else you see things fitting into that story. By that I mean: we have projects now that were trying to do containers as a service, so how would that tie into this world? I don't see the difference; it seems like it just fits fine.
OK, yeah, I think that's where the whole short-running process versus long-running process thing goes: your long-running processes may be VMs, or they may be better off as containers, with serverless supporting them by calling out to these individual functions at scale. And depending on how it's deployed, they may even be on that same container system: the serverless tool or project you're using may run on a container system, because that's basically what they are, generally. Right, there's no such thing as serverless; it's someone else's container. It's spinning up a container, running it, and throwing it away. It's a very short-lived container, but it's containers too. OK. Yeah, we won't get into the technical details, because you and I would argue too much. All right, this one is a little more philosophical. One of the biggest advantages of serverless and function as a service is the consumption model, right? It's attractive to many people; it's a way to save money for certain workflows, et cetera. You and I had a conversation about this earlier, and the question is basically: should OpenStack be completely agnostic to that, or should it somehow provide hooks for billing or tracking of that functionality?
Yeah, to me this is where I relate it back to cloud again. Yes, there were specific technologies, like OpenStack, like the stuff AWS built, that enabled cloud, but it was 20% technology and 80% pricing, billing, chargeback, the consumption model. I feel like that's here too. A lot of the big sell, if you're on AWS using Lambda, is "I don't pay if it doesn't run," versus a whole bunch of containers or VMs sitting there cold in AWS with the meter running. So I think that's a big driver. Are there interesting new technologies they've had to build to enable that, in things like OpenWhisk? Sure, but the consumption model is a crucial piece, especially if you're going to run it yourself, whether you're a service provider or just an internal service provider. Cool, we're running one of the many serverless projects: how are you capturing usage? Are you charging back? If you're not, then why do you even care that much? Should you just run containers and not do serverless internally?
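To make the usage-capture question concrete: Lambda-style billing is per request plus GB-seconds (memory size times duration), so a private deployment that wants chargeback needs to record at least those two numbers per invocation. A minimal sketch; the function names are made up and the default rates are illustrative, not current list prices:

```python
def invocation_cost(duration_ms, memory_mb,
                    per_request=0.0000002, per_gb_second=0.0000166667):
    """Rough Lambda-style cost for one invocation (rates are illustrative).

    Billing is a flat per-request fee plus GB-seconds:
    memory (GB) x duration (s) x rate.
    """
    gb_seconds = (memory_mb / 1024.0) * (duration_ms / 1000.0)
    return per_request + gb_seconds * per_gb_second

def usage_report(invocations):
    """Sum (duration_ms, memory_mb) records into a chargeback total."""
    return sum(invocation_cost(d, m) for d, m in invocations)
```

The exact rates matter less than the plumbing: if the platform isn't emitting per-invocation duration and memory, there is nothing to charge back on.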
Yep, but I would still argue for running serverless internally, because it's not just the economic story that's interesting. From a technical perspective, as a developer, and from running a platform as a service in operations, you get a very interesting deployment model with functions as a service. Yes, the testing and debugging story is very difficult at this time, but it will get easier, just as it did for VMs and containers. From a deployment perspective, that's very interesting and very attractive to me, compared to running config management at all, or even keeping a Dockerfile around and building a container every time. You take your package, your zip file, you push it up to the API, and it deploys it out and handles all of that deployment for you. That's extremely attractive to me, to my operations team, and to my developers, because it means I get to teach them one very simple process that can be automated very easily within our CI/CD pipeline, which was more difficult with containers and significantly more so with VM images. Yeah, though whatever tool you're using today, let's say in this case it's building Docker images, those tools are there for either use case. Even if you're building a regular container app today, there are a number of tools, some leverage the Heroku buildpacks, Red Hat has one called S2I, that take the code and build the image for you. I think that's applicable on either side, more so than the consumption model. OK, all right. So, as I told you, we have plenty of time for Q&A. Do you have any questions, any concerns?
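The zip-push deployment model described above is simple enough to sketch: the deploy artifact is just an archive of source files that a CI/CD step uploads through the provider's create- or update-function API. A minimal sketch using only the standard library (the function name is made up):

```python
import io
import zipfile

def package_function(source_files):
    """Build the zip artifact a FaaS platform's deploy API typically accepts.

    source_files maps archive paths to source text. A CI/CD step would
    upload the returned bytes via the provider's deploy call; the provider
    handles everything from there.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, text in source_files.items():
            zf.writestr(path, text)
    return buf.getvalue()
```

Compared to a Dockerfile or a VM image pipeline, the whole artifact step is a handful of lines, which is the attraction being described.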
I may have come to the wrong talk, but I'm missing the zoomed-out view of what the hell function as a service is. Can you give me an example of something that would be a function, how it deploys, and where it runs? Sure, sure. And then turn the clock back 30 minutes so I can see the rest of the talk and the conversation. A common example is file manipulation of some sort. For example, you have an application that plugs into your expense system, and people can upload all sorts of images of their receipts: from their phone, a PDF scanner, they come in all different sizes, shapes, whatever, and you want them in a common size and resolution. So you can have a function that takes that file as input, processes it, and outputs the standard size and format. That function sits there, not running. Say you're using AWS: you upload the image to an S3 bucket, and you set that as a trigger, so when a file is uploaded it triggers this function; the input to the function is the file, and the output is the new location. Any time a file gets uploaded there by your software, it automatically kicks off the function and it runs. In that case, because it's a very discrete process, it doesn't matter how many happen at the same time, because there's no dependency on another thing happening. If you're uploading one at a time, that's fine; if 10,000 people log in at the same time and each upload 100, it's no big deal, it'll just spawn an individual copy of that function for every single event that's triggered. They can also be scheduled, and things like that. That's a super basic use case, but any bit of your code that is "wait for this, then do this" can be a function like that. Yeah, especially if there's no coordination. The big point you kind of glossed over is that there's no coordination between those steps, and when you need to do a join across multiple functions, that's where it gets hairy, and people are starting to come up with patterns for how to handle that right now. But essentially you're using functions as a service as glue code, or in this instance to glue services: events coming out of a Swift bucket, or going to a message queue of some sort. These are all the different pieces; you're buffering all these things, and your transformations take place in the functions. Another common use case for them is with APIs. If you have an API endpoint where a GET says "give me a list of all these things," the code that actually runs, say a SELECT against a database that pulls the results back, could just be a function. Again, whether one API call happens or a thousand happen at the same time, it doesn't matter, because they're each individual activities. So I guess a good way to segue from this discussion: at least with Lambda, there's tight integration between services, and I think this goes back to my question. It opens up that infrastructure to use all these services, whether in their network or wherever, and in our world that's Swift: you say, well, if something comes from Swift, I'm going to put it in a queue, and then something comes back, and then it triggers the function, whatever. So some sort of hooks need to be there, so that whatever platform you're using, whether it's OpenWhisk or whatever, can actually do that.
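The receipt-resizing walkthrough above maps onto a handler shape like the following. The event layout follows the S3 notification records that Lambda delivers; the resize itself is stubbed out with a comment, since a real handler would pull in boto3 and an image library, and the `resized/` output prefix is just an assumption for illustration:

```python
def handler(event, context=None):
    """S3-triggered function: one upload event in, one output location out."""
    results = []
    for record in event.get("Records", []):
        # These fields follow the S3 event notification structure.
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real version: download bucket/key, resize with an image library,
        # upload the standardized file to the output location.
        results.append({"source": f"{bucket}/{key}",
                        "output": f"{bucket}/resized/{key}"})
    return results
```

Because each event carries everything the function needs, a thousand simultaneous uploads just mean a thousand independent copies of this handler, which is the no-coordination property discussed above.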
Yeah, and I think that's where, if you're building a project and you say, "hey, I have OpenWhisk and I want it to do these things specifically for OpenStack," you may also want to log into RabbitMQ and watch this queue and do that. I don't think it needs to be in the OpenStack big tent to do that. Just because it interacts with OpenStack doesn't mean it has to be an OpenStack project. Hi, gentlemen. Scott Fulton from The New Stack. A huge chunk of the content this week here at OpenStack Summit is about Kubernetes; somebody who was drop-kicked here from three years ago might get the impression that Kubernetes was an OpenStack project, right along with Cinder and everything else. So I'm wondering whether you have the impression that what Kubernetes and OpenStack appear to be doing here, hand in hand, could be a diversion from serverless goodness, or are they together headed down the yellow brick road in the right direction? To me it's kind of interesting, coming from building OpenStack for years, when Kubernetes came on the scene, because you saw this really fancy new project with backing from Google, which immediately gets your attention. But when you look at it, it's still sort of just another IaaS product, with some PaaS components on there, with the orchestration of what you can do with containers; the actual compute aspect of it is much smaller. Really, OpenStack and Kubernetes are very similar in what they actually provide to the end user, and in the developer experience. Serverless is definitely breaking from that comparison, and that's why it's interesting and can stand on its own.
Yeah, I think it's too early to tell from that perspective, because if you go out and Google "open source serverless" to see what projects are out there, a number of them are actually built on top of Kubernetes. Like I said, in general a lot of these serverless tools just spin up short-lived containers to do the thing, and they're using Kubernetes to do that for them. I think the model of OpenStack and Kubernetes working together as totally independent projects works: one doesn't have to join the other to provide value to customers. There are plenty of customers running OpenStack and running Kubernetes on it, and there's nothing bad about that. OpenStack is not part of Kubernetes, and Kubernetes is not part of OpenStack, and there's still plenty of value, which is a pretty normal operational model in open source outside of OpenStack. Maybe that's the readjustment we're in right now: everything doesn't have to be in OpenStack to work with OpenStack. And Kubernetes really does offer that layer of abstraction, right? It kind of removes the whole interop problem: if it's running in Kubernetes, you can move it to pretty much any cloud it can run on, right?
Yeah, there's a devil-in-the-details aspect there, in how you deploy it and, as they said, how some of the hooks come in, but I think they're all positive models. The idea is focusing on the developer experience: how do we help developers be more productive, quicker, instead of saying, "no, you're doing it wrong, you need to do it the way we think you should do it"? You want to do containers? We can do that. You want to do serverless? We can do that. We can provide you the tooling that makes you productive. I think that's the key approach, not forcing things. That was one of the mistakes we saw with PaaS: do it exactly this way and it'll be awesome, but if you don't want to do it our way, then just get lost. Having that flexibility gives developers the best chance to make the decisions they want to make. Any other questions? Hey guys, my name is Michael McHugh, and I also work for Red Hat, so this won't be a softball. You were mentioning how functions are good in situations where you don't have complicated dependencies between the different pieces of whatever you're building, and I'm curious if there's a point at which it makes sense to use distributed computing frameworks like Spark or Flink as opposed to functions. Aside from that one differentiator of interlinking the parts, is there one place where it might be better to use functions versus a distributed computing framework? If you're streaming in information, or you can view your information as a stream, like a stream of files coming in where you modify each one, I could easily use a distributed computing framework to do that, or I could use something like function as a service. Do you think there's a differentiation there?
Who manages the distributed computing framework for you? Well, Kubernetes, for example. OK, but that's what it often comes down to, in my opinion: who's actually operating that fundamental abstraction, that underlying system, for you? Do you have to be intimately involved? Do you have to understand how it's deployed and its exact architecture? Well, let's say we're going with the magic approach here: either I'm using Kubernetes to spawn up containers that run functions, or I have some long-running HPC process that just ingests data across a stream, and I'm letting Kubernetes manage the elasticity of it for me. Do you think there are greater differentiators between these two platforms? I think that's where we're getting into some of the minutiae, and it has to do with the deep technical desire to have one thing that does all the things; we're always looking for that. Same thing with containers: Kubernetes got cool, and it became "how do we do everything in Kubernetes and get rid of everything else?" Now the serverless version is "how do we do everything in serverless and get rid of everything else?" Some of these tools are, to your point, very similar, with slight differences, and I think it depends very much on your workflow. Like you said, if it's mostly HPC-type data but you want to do some of these other things, I've seen some people look at leveraging that same infrastructure to take care of that piece too. It's very dependent on not just that specific workload but your overall environment, for what makes the most sense. Obviously we don't always want to add more tools, but sometimes it
just is what it is. I wanted to disagree, but I can't. The longer you hold a hammer in your hand, the more everything starts looking like a nail, right? There are use cases and architectural decisions that were made for each particular thing. Generally speaking, with streaming data processing like Spark versus function as a service, you need to take the consumption model into consideration: if you're going to have a large stream of calls to the function, it might not make sense to have it as a function; you might as well just have it running all the time, in a container or Spark or whatever it is. There are obviously multi-dimensional decisions you need to make, both organizational and technical. It's not the cloud for everything, it's not Kubernetes for everything, it's not functions for everything; they each have their use case. Well, and I think another piece is that some of the things that differentiate them now are actually arbitrary, right? Hey, functions only run for up to five minutes. Why? Because that's all Amazon did first, so that's what everyone else does. Why can't I have a two-day-long-running function? "Well, that's a long-running process," and now we're getting into semantic battles, versus just, "well, that would be super useful for me if I could do that." Some of this stuff is still so early that we're not even sure of the best way to do it. To me, another big issue, which we've only talked about a little, is the tooling around it and billing. What do you see as the biggest challenge there with trying to operate, you know, day two? Like,
cool, I built a function, I put it on Lambda, I ran this thing, that's awesome. But now, hey, we're going all in on this; we have a bunch of functions. How do you build them? How do you run them? How do you monitor them? Again, it's all operations. You can't get rid of operations; the whole NoOps thing is a complete lie. You just refocus your operations on the application. You still have all those metrics, you still care about the business, because at the end of the day, most of us here, I don't know who works in education, but most of us are probably trying to make money, solving some type of business need; we're just using serverless for it. We have all the same problems. They're not going away; they're just changing shape, and all those patterns we've used for the last 20 years or more still apply in this new realm, or whatever you want to call it. The tools are a little bit different, the frameworks are named different things, like Serverless, or JAWS as it used to be, but it's the same story. We've seen this before, when cloud first came out, when virtualization went from VMware on-premises to actually being hosted by somebody else, when AWS launched. It's a different change, but the patterns all still apply. Yeah, like, "well, we don't need sysadmins anymore because the servers are virtual." No. I think the best way to think about it is: everything after the code commit is ops. Today there are some people doing the packaging and the running; there may be different people running the servers or running the containers. Some of those people now work for the provider, or for another group in your company, but all that stuff still happens. And then, OK, well, now I'm just giving you code; well, the application ops is
still a thing and still needs to happen. So again, on the tooling front, this is my personal challenge with serverless regarding tooling. Think about the things you do today, say with OpenStack: you have tools to collect data, you have logging, you have some sort of monitoring set up. You're either using a distribution, so you have a vendor building you packages, or you're building your own. There's an accepted model of how you operate that just doesn't exist today, or is only just starting, in serverless. There are some companies starting to come out with some of those pieces, but it's one of the challenges. Something as simple as, for me, Python: I ran into it right away. I thought, oh, serverless, that'd be awesome, so I wrote this thing, uploaded it to Lambda, tried to run it, and it failed with some cryptic error. I researched it, and it turned out I'd built this Python thing on my Mac laptop. You have to include all your packages with your upload, and this one actually had compiled pieces; Lambda runs Amazon Linux, and I'd compiled for Mac, so it doesn't work. So then, how do I handle that? Oh, I'll just spin up a CentOS machine and build there once. Oh no, now those packages update, so now I need a regular pipeline to get the right versions of those packages in, because I still want to develop on my Mac. You start building these things very, very ad hoc. You're starting to see some pipelining tools for serverless show up, and on the monitoring side you're starting to see companies like IOpipe doing monitoring for serverless, but it's so early that the tooling you're used to just isn't there yet. Even things like blue-green deploys we're only now starting to see people talk about, and with a continually running process like that,
Especially with messages: they get backed up in a queue for some reason, you have to think about those types of problems, and we're only just discussing them now. These are all old-school distributed-systems problems that we've had for years and have mostly been able to avoid talking about. But now we're getting to the point where we don't have to deal with infrastructure operations anymore; we can't even blame operations, because it doesn't exist from an infrastructure perspective. So now we actually have to focus on the business case: how do we handle these messages? Do we have a dead-letter queue? What does that look like? Who actually goes and checks the dead-letter queue when it gets too deep? How do we check, end to end, from a business perspective, that we're actually doing what we say we're supposed to do?

Yeah, I've seen that even with teams that were traditionally building VM images: now they're actually building the pipelines to build containers and serverless, handling that next layer of operations. It used to be, well, the developers have a script that they use to package this up, and then we just put it in a VM. Should the developers really be spending their time doing that? No, that's kind of an operations job. And to your point, if you're no longer busy spinning up VMs or installing and patching servers, you now have time to work on this kind of stuff. That's my favorite thing about serverless: I don't have to patch infrastructure for security, somebody else does that for me. I have to patch my application's security and watch out for all that stuff, but I don't have to worry about infrastructure.
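[Editor's note: the "who actually checks the dead-letter queue" question above is typically answered with a small scheduled function of its own. A sketch under assumptions: the queue URL and the threshold of 10 are made up, and `boto3` is the AWS SDK, used here only as one example of fetching queue depth.]

```python
# dlq_check.py: sketch of a scheduled "watch the dead-letter queue" job.

def should_alert(depth: int, threshold: int = 10) -> bool:
    # Pure decision logic, kept separate so it can be tested without AWS:
    # alert once the DLQ holds at least `threshold` messages.
    return depth >= threshold


def dlq_depth(queue_url: str) -> int:
    # Fetch the approximate message count for an SQS queue.
    # boto3 is imported here so the decision logic above has no AWS dependency.
    import boto3
    sqs = boto3.client("sqs")
    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    return int(attrs["Attributes"]["ApproximateNumberOfMessages"])
```

Wiring `should_alert(dlq_depth(url))` to a schedule trigger and a paging hook is exactly the kind of business-level operations work the panel argues does not go away.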
Now, what about the consumption model? One of the big advantages of public serverless like Lambda is that it's theoretically infinite scale, right? Well, not quite: if you've ever logged into EC2 and requested a machine in an AZ, sometimes it's "sorry, I'm out of machines right now." It happens. So it's theoretical, but for the average developer it's effectively unlimited. How do you handle that in a private scenario? How do you figure out how much to build?

You kind of do it by ear. I mean, capacity planning has always been a problem. With private cloud you have to take that on: understanding your workloads and, honestly, constraining those workloads. You're going to have that same problem. Maybe you can get more efficiencies across all the serverless workloads you're running privately, but you're still going to have that scaling problem.

I'm Nitin from Teradata. I have a question about the serverless model, the function-as-a-service part: what kind of security is provided? It seems like this model can have serious security issues. If you let some function be executed, you have to give it some permissions, so how do you manage those permissions, and how do you make sure it's not doing something weird, like trying to delete data instead of converting data?

It gets interesting, because you are essentially counting on your serverless provider, whoever that happens to be, whether it's Lambda or Google Cloud Functions or Azure Functions or OpenWhisk. You're assuming the provider is handling the security between your function and the hypervisor for your actual execution. But then you also have to think about the application model: if somebody gets your keys and can push functions into your execution environment, they could absolutely get access to your database, depending on your security modeling, things like IAM, which governs what they can actually access and delete from your databases at the function level.
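[Editor's note: the per-function, least-privilege policy idea discussed here can be made concrete. A sketch of an IAM-style policy document, expressed as a Python dict; the DynamoDB table name, region, and account ID are made-up placeholders.]

```python
# Least-privilege policy for a function that should only ever read.
# Because it grants no Put/Update/Delete actions, even buggy or
# compromised function code cannot mutate the table.
READ_ONLY_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Read-style actions only.
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            # Scoped to a single table, not "Resource": "*".
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        }
    ],
}
```

Attaching a policy like this to the function's execution role is what lets the policy, rather than the code, enforce "this function only reads."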
And if you have multiple functions that do different things, say one function's only job is to read stuff from the database for an API call, it's about handling the security around each one. Say that function somehow got compromised and can delete data, or it was even just bad code, someone pasted the wrong SQL statement in there, and now it's deleting stuff out of the database whenever the function runs. It's about restricting those pieces as well, and that goes back to which accounts you're using, and then also to how you're passing credentials into your serverless function.

But I wanted to go back to execution: if you have a function that you know should only read from the database, you can put a security policy around that specific function for that specific thing. So even if the code changes out underneath it, that security policy is going to block the function from actually making any change requests.

There are also considerations around whether the function is internal, triggered by some other event, or whether it's API-facing, something that can be hammered. So there are definitely security and privacy considerations there too. Yeah, and one of the pieces is that with serverless, you don't run a web server inside a function. You just don't. Well, you could, but don't do that. In general you have some sort of API endpoint in front, whether it's the API Gateway if you're in Amazon, or the whole pile of different API-gateway-type projects out there, and that's generally what's calling your function. So that's another place, from a security perspective, to ask: am I sanitizing my inputs so I don't get a SQL injection or something into my function? What code my API endpoint is passing to my function is another thing to be checking on, too.
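[Editor's note: the SQL-injection point above comes down to parameterized queries. A minimal sketch; `sqlite3` stands in for whatever database the function actually talks to, and the `users` table is an illustrative assumption.]

```python
# Guarding a function handler against SQL injection: let the database
# driver escape the value instead of formatting it into the query string.
import sqlite3


def fetch_user(conn: sqlite3.Connection, user_id: str):
    # BAD:  f"SELECT ... WHERE id = '{user_id}'" would let a caller
    #       smuggle in something like "x'; DROP TABLE users; --".
    # GOOD: pass the value as a bound parameter.
    cur = conn.execute("SELECT id, name FROM users WHERE id = ?", (user_id,))
    return cur.fetchone()
```

With the parameterized form, a hostile input such as `"1' OR '1'='1"` is treated as a literal (and matching-nothing) id value rather than as SQL.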
Yeah, it seems like another layer is required to manage the permissions and the authentication of the functions, what is allowed and what is not allowed, and I don't see any mechanism existing for that today.

Right, it just varies by what the underlying provider supplies. IAM in Amazon has a lot of that stuff; some places do, some don't; Keystone doesn't have that level of granularity, right? So, going back to where we started with OpenStack: do we need a serverless project in OpenStack? No. But we may need Keystone to have more granular policies, and we may need Swift to have a way to subscribe to file changes, for triggering serverless functions. We may need capabilities brought to the existing projects, and I think that will start to mature to enable this.

So I guess let's each do a quick conclusion. My conclusion would be that serverless isn't really getting rid of the complexity of releasing your apps; it's just moving the complexity elsewhere. It's not going to make your application inherently more secure because you use it, and it's not going to make your application any cheaper or faster to develop. A lot of these things get attached to the buzzword, marketing people love it, the hashtags are on Twitter all the time about serverless, and so all the CIOs go, "maybe we should look into that." To me it's a matter of seeing it as both a technological tool in your tool belt and a consumption model, rather than as a new panacea.

And I'd say serverless is just another tool, a new layer of abstraction. Given any tools and frameworks, we can make as many bad decisions as we want to; this is just another venue, with some slightly different outcomes, where we can still make bad choices and have problems.

Yeah, there's this idea that with each new layer there's a new abstraction, and the old abstraction layer just goes away because this is the new thing. I think that really just isn't happening. It's like, well, we don't need IaaS anymore because we have PaaS; PaaS is going to go away because of serverless; in five years everyone's just going to be doing serverless and all this other stuff is going to go away. It just hasn't been the case. I mean, people are still running mainframes; all those pieces are out there. It's about picking the right tool, to Bob's point, in the toolbox, and leveraging it. There are definitely plenty of ways to go wrong with it, but there are a lot of really interesting ways to use it that are worth checking out.

All right. Hey, thanks.