Hello, and welcome to the panel for CNCF. Today, we are an interesting bunch over here to talk to you about serverless and everything serverless security. To start off, we're going to do a basic introduction. I'll just start with mine, and then I'd love to have a few words from each of the members here on the panel. My name is Ashish. I am the host of Cloud Security Podcast. I'm a CISO in my 9-to-5, and I run a live stream on cloud security and cloud data security over the weekend. That's my short intro. Over to you, Andrew, for your quick introduction. I'm Andrew Krug. I'm a technical evangelist at Datadog, and I've done a lot of work on serverless stuff in the past. That's my short intro. I'll go ahead and hand it over to Ariel. Thank you, Andrew. So I'm Ariel. I'm a cloud security evangelist at Cisco, in the emerging technologies innovation team. It's a new team that Cisco set up to tackle cloud technologies and security, among other things. And I'll hand it over to Ragashree. Thank you, Ariel. I'm Ragashree, and I'm a cloud security specialist at Nokia, handling cloud security for one of the largest private clouds, the Nokia Enterprise Services Cloud. Happy to be here. Awesome, thank you all. So I'm gonna kick it off, because considering this is a 40-minute panel, I wanna start with the basics, since a lot of people may not even know who we are or what we do. So maybe the first question to ask is: what is a TAG in the CNCF, and how does anyone become part of it? Andrew, do you wanna just kick it off, man? Yeah, so the Security TAG, or the security Technical Advisory Group, is a group of folks who facilitate and collaborate on a variety of security topics across the CNCF, right? So they do everything from architecture patterns and prescriptive guidance to white papers, like the one that we're gonna talk about a little later in the panel. But basically this is just an open group that really anyone can be a part of. 
And I think that's one of the really cool things about all things CNCF, but also the TAG: it's really open to everybody. So you don't have to be a security expert to show up and make a meaningful contribution. Is that only for serverless security, or can people join any security project? People can join any security project, and there's a variety of security projects going on, everything from serverless to software bill of materials projects to things like Gatekeeper for Kubernetes; lots and lots of great work. So I encourage anybody who is interested in this panel: you're probably the target demographic for the TAG, right? Cool. Well, I'm biased, but I'd say the serverless security one is the coolest. That's why we're all here. So we should definitely encourage people to join the serverless security one. Talking about serverless security, maybe a good question to ask next, and maybe this one is for you, Ariel: why was there a need for a serverless security white paper? Isn't there already a serverless white paper in the CNCF? Right. So there is a serverless white paper, and I think that was great work, done a few years ago, to try to highlight the threat landscape. But as with everything in security, things are changing: serverless is changing, there are more services, more attack options, more risks being discovered. And I think it was a good time to refresh that previous work and take a broader look, both at the risks that already exist and at the new risk vectors serverless applications can face. On the other side, also to look at the solutions and what has improved; there's been tremendous progress from the different cloud providers, which made this area due for a refresh. That, I think, was part of the purpose behind this white paper. So, I mean, I would have thought a lot of the security is usually covered already. 
So what are some of the interesting threats you may have come across for serverless that make it different and maybe drive the requirement for a separate white paper, from an attack vector or threat perspective, sorry? Right. So in serverless there are different attack vectors, with threats coming from different spaces. Let me try to forget the buzzwords and talk about some details. Serverless functions are really tightly coupled with different cloud services. And the way you can visualize your entire environment, the permissions that you grant to your environment, and the different configurations that you apply to different cloud services all have a significant impact on the actual risk of the serverless function. So I think in the beginning, there was the motivation to try to draft what the risk model would be, what the different threats would be. Now we can see more and more services being used today together with serverless functions. We can even see new types of things that need to be addressed, like software bills of materials: getting a better understanding of the supply chain, the different software packages in your serverless functions. Even if functions are small pieces of code, they still contain different packages and use different services that create risk for your application. So I think all of it together boiled down to a need to address serverless security from an updated angle. Did I answer your question, Ashish? Yeah, you did kind of answer it. Were there any specific examples that you wanted to call out as attack vectors for serverless? 
So I think we can take a look, and we can discuss it at more length later in our panel, at the new ability to expose an HTTPS endpoint without passing through an API gateway, to make those endpoints simpler and easier to use. It's very easy, and I understand the motivation why, for example, AWS released this functionality, but bypassing the API gateway or bypassing load balancers avoids a lot of the security mechanisms which are built into them to validate, verify, authenticate and authorize the request, which, as an example, puts the serverless function being invoked at greater risk. So maybe this is something specific to a new AWS service. But usually when people ask me about it, before diving into specific attacks crafted for serverless, I always try to give an example: look at the permissions. The fact that every function requires a different set of permissions means that, especially in large-scale environments, you reach a situation where you have permissions which are not clearly designed or tailored for the function. And then you're gonna find yourself granting the application a lot of permissions on a lot of resources, to do many things which it's not supposed to do, or at least that you didn't plan for. And the result is that if a function is breached, the impact is much larger. Now, when you look at, for example, all the different functions and the different services or event buses that they are connected to, and again, under the assumption that you want them to run free, to run in a very smooth way, and you want to avoid performance degradations, the room to secure them is of course much smaller. 
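The permissions point made here can be sketched concretely. Below is a minimal illustration, assuming a hypothetical function that reads from one S3 bucket and writes to one DynamoDB table; all names and ARNs are made up for the example:

```python
# Sketch: contrasting an over-broad Lambda role policy with a tightly
# scoped one. The bucket and table ARNs below are hypothetical.

OVER_BROAD = {
    "Version": "2012-10-17",
    "Statement": [
        # Grants every action on every resource: if the function is
        # breached, the blast radius is essentially the whole account.
        {"Effect": "Allow", "Action": "*", "Resource": "*"}
    ],
}

LEAST_PRIVILEGE = {
    "Version": "2012-10-17",
    "Statement": [
        # Only the actions this function actually performs, and only on
        # the specific resources it touches.
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-input-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/example-table",
        },
    ],
}


def wildcard_statements(policy):
    """Return the statements that grant wildcard actions or resources."""
    flagged = []
    for stmt in policy["Statement"]:
        actions = stmt["Action"]
        resources = stmt["Resource"]
        if not isinstance(actions, list):
            actions = [actions]
        if not isinstance(resources, list):
            resources = [resources]
        if "*" in actions or "*" in resources:
            flagged.append(stmt)
    return flagged


print(len(wildcard_statements(OVER_BROAD)))       # flags the admin statement
print(len(wildcard_statements(LEAST_PRIVILEGE)))  # flags nothing
```

A check like `wildcard_statements` is the kind of simple guardrail that can run in CI against every function's role, which matters precisely because, as noted above, each function accumulates its own permission set at scale.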
And this is why you need to be much more sensitive to aspects that you could afford to be less sensitive to in more stateful applications. Sweet, thank you for that. Andrew, do you wanna follow on that as well? Yeah, I think the bottom line here, when we think about the attacks and threats for serverless, is really to spin that and think about how the attack surface is increasing, right? So when I first started to look at this stuff in 2017, AWS Lambda, for example, was pretty new, and there were a couple of patterns for getting events in and doing invocations. And now, you really have to think about serverless security on three different fronts: you have the runtime itself, you have the network perimeter, and you have identity, right? And maybe that identity now is multi-cloud. So it's just more and more complicated. And as different runtimes bolt on more functionality to extend the capability of the runtime, like layers, or the ability to bring your own Docker container that suddenly can be run as a serverless function, the more diversity we get in those environments, the more challenging they become to defend, right? Because that was always the sales pitch before: you have the shared responsibility model, and then you have this very, very small thing that you can laser-focus on. And as that small thing becomes big, we're increasingly challenged. Do you agree with that, Ariel? Yeah, yeah. And I just wanted to add one small thing to what Andrew said: the attack vectors increase, but on the defender side, the ability to defend without creating friction or degradation, or imposing external barriers on your functions, is really small. This is why you need to be super careful in how you configure the monitoring. Yeah, that's actually true. I was gonna add a few more solid examples as well. 
To what you said, Ariel, about the API space: I think it's definitely interesting, and maybe this is a combination of what both Andrew and you said. With the API being, I guess, the trigger for starting off a serverless event, you don't have control over what can send a request; to Andrew's point, it could be multi-cloud, it could be an IoT device. Plus the fact that now there is no workload protection, no antivirus that you can actually deploy on a serverless machine. So on the monitoring aspect, what are you really looking for in a real-time context? It's a lot more complex. And I don't know how many people out there are thinking, hey, we wanna deploy serverless, but then, to Andrew's point again about shared responsibility, how much is mine, and how much is AWS's or Azure's or Google Cloud's responsibility? I think that's definitely another interesting aspect that came out. The threat that always makes me cringe is the denial-of-wallet attack that people talk about, where, because you don't really control the number of serverless instances that can be created, someone can basically keep sending requests and it just keeps adding more Lambda invocations until you end up with a fat bill at the end. That was definitely my favorite one to read about. So, are there any attacks that you have found fascinating in the serverless space? Andrew, you wanna go first? I mean, there's a variety, right? And I think that there's been a lot of attention lately on the crypto miners that have popped up. Oh yeah. That are purpose-built for Lambda functions. So this is kind of an interesting time, right? Because we have this thing now that is popular enough that people are making bespoke malware, but that's really not the attack vector. 
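The denial-of-wallet concern above can be put into rough numbers. This is a back-of-envelope sketch; the pricing constants are illustrative assumptions, not any provider's current list prices:

```python
# Back-of-envelope denial-of-wallet estimate: what an attacker flooding an
# unthrottled function could cost per day. The pricing figures below are
# illustrative assumptions, not current list prices.

PRICE_PER_MILLION_REQUESTS = 0.20   # assumed request price, USD
PRICE_PER_GB_SECOND = 0.0000167     # assumed compute price, USD


def daily_cost(requests_per_second, duration_ms, memory_mb):
    """Worst-case daily spend if every request triggers an invocation."""
    invocations = requests_per_second * 86_400  # seconds in a day
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = invocations * (duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost


# A modest 1,000 req/s flood against a 500 ms, 512 MB function runs into
# hundreds of dollars per day under these assumed prices.
print(round(daily_cost(1_000, 500, 512), 2))
```

Capping concurrency (for example, with reserved concurrency limits in AWS Lambda) is one common way to bound this worst case, at the cost of legitimate requests being throttled during the flood.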
Crypto mining is really the post-exploitation mechanism that the attacker is using. I think the most important thing to realize is that attack vectors for serverless are really just attack vectors. They're the same ones that are prevalent in the OWASP Top 10. It's just that, from a forensics perspective, because the environment gets thrown away at the end, it can be really, really difficult to reverse what happened. So if you think about something like deserialization vulnerabilities, really, really basic, right? But the evidence that's left behind is only as good as the logging. Yeah, that's a very fair point. Do you have one as well, Ariel? Yeah, I want to echo what Andrew was saying; it's really an important point. How do you detect them? With containers, for example, if a crypto miner campaign runs in your environment, it's easy to see the new containers and new services and say, hey, I'm not familiar with these. But functions are ephemeral: they execute, and you can only see afterwards how long they ran or how many executions there were, and you have to piece it together from the records. It's much harder to detect these types of campaigns or attacks. So as Andrew said, the attacks could be the same, but the detection is much harder. Yeah. Maybe that's a good segue into what's a good practice for managing, or at least starting off doing, serverless security. Some of the initial points that come to mind are to have control over your identities, which we've called out already, and maybe having some kind of role-based access control, because the number of Lambda functions you see with admin roles in AWS is significant, and I'm sure versions of this exist in Azure and Google Cloud as well. Supply chain security has definitely become quite a common concern as well. 
People have a CI/CD pipeline that can, at the end, trigger a Lambda or some other kind of serverless function, but is anyone validating whether it's authenticated, or whether it's restricted to a certain function as well? I think the one theme that I've taken away from what Ariel and Andrew have said is the detection part. And Andrew, I think you have some thoughts on the whole logging aspect of this as well. In the serverless space, what do people do for logging? Because it sounds like I can build the most secure serverless function: identities covered, my code is really secure, I've got SAST and DAST running on it, SCA so my libraries are clean. But it would all be pointless, as Ariel said, if there is no detection. So what are some thoughts on logging in this serverless world? Yeah, I think what we're seeing evolve in the serverless space right now is really interesting, which is that the observability we used to use for performance monitoring, and just generally determining if the system is healthy, has become the very same facility that we need for security, right? And in some ways our need to use those same logs for security has enhanced the way that we're doing logging. So simple things, like the idea of having a set of standard attributes or structured logs that you're implementing in the code itself, are really, really critical. And then things like, on the cloud provider control plane side, ensuring that you're following some kind of unified tagging standard, so that you know which pieces of an application are associated with providing which service. So when those attacks do come in, and you do have the facility for detecting that an attack has come into the serverless environment, you can immediately create this graph. 
And when I say graph, I mean a graph like BloodHound makes: a graph of the potential path of the attacker through the system, not just a dashboard chart. Always be thinking in that sort of way: what are the services associated with it? What is the identity? And then what are the potential lateral moves from inside of that initial attack, right? Oh, something like the MITRE ATT&CK framework. A little bit, yeah. Okay. So to your point, because what Ariel mentioned was quite interesting: if that doesn't exist, are there any specific things we should be looking for in logging? From a security perspective, what do you recommend? So I think it's important to divide this into two efforts, right? Effort number one is: how do you make the logs readable by machines and by humans? You'll hear people say, oh, JSON, just JSONify all your logs and you're kind of done. Definitely JSON. Or even YAML, for that matter. Oh my God. JSON is not super, super human-readable. So when I sit down and I make a brand new serverless app today, I'm really thinking in key-value, parsable single log lines that contain a set of standard attributes that I can then really easily use in a detection platform or something, and maybe standardize those or remap them to the same ones I use for the cloud provider's control plane. So all of a sudden we're able to do correlation between events in something like AWS CloudTrail and what's happening inside of the application itself. It's about thinking: what are the pieces of data for my business, my use case, and serverless that I want to bring together to perform detections on the application itself? 
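The idea of parsable single log lines carrying a set of standard attributes might look something like this sketch; the service name, attribute names, and identifiers are made up for illustration:

```python
# Sketch of a "single parsable log line with standard attributes".
# Attribute names and values here are illustrative, not a required standard.
import json
import time


def log_event(event_name, **attrs):
    """Emit one structured log line carrying a fixed set of standard
    attributes plus any event-specific key/value pairs."""
    record = {
        "timestamp": time.time(),
        "service": "payments-api",  # hypothetical service name
        "env": "prod",
        "event": event_name,
    }
    record.update(attrs)
    # One line per event, machine-parsable, sorted keys for stable diffs.
    print(json.dumps(record, sort_keys=True))
    return record


# Every invocation logs what triggered it and with which identity, so the
# control-plane trail (e.g. CloudTrail) can be correlated with it later.
rec = log_event(
    "function.invoked",
    trigger="api_gateway",
    request_id="abc-123",  # hypothetical request id
    caller_arn="arn:aws:iam::123456789012:role/example-caller",
)
```

Because the same attribute names are used on every line, a detection platform can filter and join on them directly, which is exactly the correlation with control-plane events described above.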
So you're saying we can correlate, I guess, the logs provided by whoever your serverless provider is with the application logs, to hopefully build some kind of detection. Is that right? Is that how you're thinking about this? Yeah. So if you really think about it, what you want is this entire chain of custody of attribution, from the invocation of the function to the time that the function exits, right? And depending on the method by which you're invoking the runtime that runs the code, that could be a bunch of different ways. So if it comes in through API Gateway, that's an event whose identity is API Gateway. If it's a user calling invoke-function, that's different. If it's a CloudWatch event on EventBridge, that's a different story too. So being able to attribute those back to how this thing even got started, and then knowing what happened inside of it, and knowing what happened afterwards, potentially if somebody got an identity out: that's telling a real story. Yeah, and I guess to your point, if you can trace it back to what the change in the function code itself was, maybe that could be one more data point in that whole flow: if there's a change to the function, who made the change, and how did that travel across to production or wherever. Cool. Anything to add to this, Ariel, or are you good? No, I'm good, I'm good. Perfect. Cool. So maybe it's also a good segue into, because I guess what we're also trying to cover is, some of the interesting topics that came out of the serverless security white paper you wrote. So I'll definitely encourage people to check it out when we release it. But one thing that we were also looking into was the whole evolution: what does the future of serverless look like? Everyone who's listening to the panel has heard about the different kinds of threats that exist. 
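The attribution idea, tying an invocation back to how it got started, can be sketched by inspecting the shape of the incoming event. The field checks below are simplified assumptions about common AWS event shapes, not a complete classifier:

```python
# Sketch: attribute a Lambda invocation to its source by inspecting the
# event shape. These checks are simplified; real events vary by service
# and API version, so treat this as a best-effort starting point.


def invocation_source(event):
    """Best-effort classification of what invoked the function."""
    if not isinstance(event, dict):
        return "direct-invoke"
    if "requestContext" in event and ("httpMethod" in event or "routeKey" in event):
        return "api-gateway"
    if event.get("source", "").startswith("aws.") and "detail-type" in event:
        return "eventbridge"
    if "Records" in event:
        # S3, SQS, SNS and similar services deliver batched records.
        return "record-based-service"
    return "direct-invoke"


# Trimmed-down example payloads:
print(invocation_source({"requestContext": {}, "httpMethod": "GET"}))
print(invocation_source({"source": "aws.events", "detail-type": "Scheduled Event", "detail": {}}))
print(invocation_source({"Records": [{"eventSource": "aws:sqs"}]}))
print(invocation_source({"user_id": 42}))
```

Logging this classification as one of the standard attributes on every invocation is what lets you reconstruct the chain of custody afterwards, even though the execution environment itself is thrown away.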
They've also heard about how to log it properly, so that if there is a change or a threat that needs to be picked up, there is, to Andrew's point, an end-to-end attack path that you can come up with, maybe similar to MITRE ATT&CK; maybe MITRE will come up with a framework for this as well. But one of the things that came up about the future was the whole aspect of abstraction, as we keep going into more layers of "I don't need to care about my infrastructure anymore". That means I don't need to care about patching, I don't care about workload protection, I don't care about antivirus. Then if I go into the CI/CD pipeline, well, as long as my pipeline covers the basic application security functions, like SCA for vulnerable libraries or static code analysis, then the code being maintained and pushed into a serverless function is clean from a security perspective, or hopefully has only lows. Those kinds of things, and maybe identity, are going to be the only things people focus on. Because as long as the orchestration, creation and generation are taken care of under the quote-unquote shared responsibility of your serverless provider, I think that's what a future in this kind of world can look like: a lot more abstraction. But I'm pretty sure, as all of us are hinting towards the white paper, we probably should go into what went into producing it. So maybe, Ragashree, you could give us an intro into the process undertaken for the white paper, since you were quite involved as well. Could you shed some light on that for everyone listening? Sure. So the basic process has already been stated on this call: there was already a serverless white paper available. 
So the first step for us was to identify the gaps in that white paper and how we could fill them from a security perspective: the majority of the threats, some of the myths we had to bust, and some of the best practices we could share with the community to support the complete end-to-end lifecycle of securing serverless itself. With this, the first step was to jot down all the aspects we wanted to cover in the white paper, and the second was to open an issue in our TAG GitHub. With that, we got a lot of interest from the community. We assigned project leaders, rolled out the plan, and had consistent meetings across a couple of weeks. We synced up, I think, once a week to stay aligned, took on sections of the project, worked on our content individually, got feedback, and released the first version for internal review as well. And once we thought we were in good shape and okay to go ahead with a wider audience review, the paper was released to the complete TAG Security mailing list, and it is now in the community review phase as well. So here we are, and it would help us greatly if you can chime in, add your inputs, and help us make this white paper even better. Sweet. And I can definitely vouch for that: there were definitely interesting conversations about whether something was relevant for the white paper, who the audience is, and what the context for it is. For people who may be thinking, hey, I want to contribute because I've heard these four awesome people on a panel: is there a mailing list or something people can subscribe to? How do they become part of this? Absolutely. The TAG Security GitHub, which I think Andrew has already pasted in the chat. You can subscribe to that repository, and there is a mailing list right in the repo. Please subscribe to the mailing list. 
You will get all the content, up to date with whatever we are publishing. There are lots of projects available, right from policy to SBOMs and serverless. And we also have an interesting topic, the Lexicon. So whatever you want to contribute, you can just raise an issue and get the attention of our chairs, and we're here to help you. We have a wonderful support system in terms of mentorship or anything you need to get started, and if you're a seasoned professional, we are always seeking your inputs. So feel free to get in touch with us. We have our TAG Security Slack as well, so feel free to DM us and we're happy to help. Sounds good. And although, as I said earlier, serverless security is probably the coolest group (and I may be biased), there are definitely a lot more projects in there. If you want to work on SBOMs, software bill of materials, which got a lot of attention because of the presidential executive order, you can totally do that as well; there's an interesting group for that too. But that's pretty much all we had time for here. Any last comments from anyone before we close? I think we have an interesting bunch over here, and if anyone has any questions, as Ragashree mentioned, feel free to DM us on Slack, or go to the TAG Security GitHub and open up an issue, or just message the Slack channel relevant to the project that you want to get involved in. But that's pretty much all we had time for. Thank you, everyone, for joining the panel. If you have questions, as always, reach us on our favorite Slack and GitHub channels. Otherwise, we'll talk to you soon on the next panel, maybe. Thank you. Thank you. Thanks, everybody.