So please welcome our next speaker, Michael. Good evening. I know it's very late, and we're all looking forward to the beer, whoever didn't have enough yesterday. So I tried to keep it light, but I think it's an important topic. A little disclaimer: I'm not a security expert. I'm a cloud native practitioner who deeply cares about security. All I want to do is raise awareness about attack vectors and good practices: what you can do, and what you should be aware of, if you go cloud native.

So what the heck is cloud native? The clicker doesn't work anymore. So what is cloud native? You can say, well, I'm just using the APIs of a public cloud provider. That's fine; that's a very valid definition. Or you could go with the CNCF, the Cloud Native Computing Foundation, which has a couple of things in there: containers, service meshes, microservices. You can read that. You don't really see serverless in there, although a working group for serverless exists as well. There are a bunch of projects in there you might have heard of, like Kubernetes and Prometheus, and specs like OpenCensus, OpenMetrics, and so on and so forth.

So what does the overall development and deployment flow look like in cloud native land? Well, we have a couple of things here: source code, configuration, and our secrets, meaning API keys, database passwords, or whatever. Then we have dependencies: libraries, frameworks, whatever we build on, the JavaScript framework of the day. And we have, hopefully, everything in version control. The first step, what you as a developer do, and if you're a cool kid you actually do that continuously, is deliver. You deliver your artifact into something that I've called up there an artifact repository. Depending on what you're using, that might be different things. You might put your functions there, your AWS Lambda functions. You might put your container images there.
That allows the deployment system then, at some point in time, to actually deploy and run your application. And there are multiple layers. You as a developer might be responsible for certain layers; the infrastructure team or your friendly folks at your public cloud provider look after other layers. But overall, this is pretty much what you're dealing with.

In a nutshell, containers and serverless are actually pretty similar. You produce certain artifacts that you put somewhere. As I said, in the case of Kubernetes, that would be a container image; in the case of Lambda, a zip file. You put it somewhere: a container registry, or S3 buckets. There are differences, for example event triggering, which is native to serverless but not to Kubernetes. And because of the statelessness of the functions, you always have to put the state somewhere else. We'll get to that later on as well. There are a couple of other things, but the main point here is really the billing. The main characteristic of a serverless system is that you only get billed for what you're actually using, and not for the whole runtime of the system. A little side note: if you plan to lift and shift an existing system into serverless land, think twice. You will need to re-architect.

So let's have a look at Kubernetes. That's an example of a portable container lifecycle management system that has a bunch of declarative APIs and control loops that essentially try to reconcile the observed state with the state desired by the user. It's very robust, flexible, and extensible. And you can see there are many, many moving parts that you potentially have to worry about. Actually, that's how it looks: you end up with a lot of attack vectors, both in the control plane and in the data plane, where the entire state is stored in etcd.
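The reconciliation idea just described can be sketched as a toy control loop. This is only an illustration of the concept, not actual Kubernetes controller code, and the state shapes are made up:

```python
# Toy sketch of a Kubernetes-style reconcile loop: compare observed state
# to desired state and emit the corrective actions needed to converge.
def reconcile(desired: dict, observed: dict) -> list:
    """Return actions that move observed state toward desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("update", name, spec))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name))
    return actions

# Example: the user wants 3 replicas of "web", but only 2 are running,
# and a leftover workload exists that is no longer desired.
desired = {"web": {"replicas": 3}}
observed = {"web": {"replicas": 2}, "orphan": {"replicas": 1}}
print(reconcile(desired, observed))
# → [('update', 'web', {'replicas': 3}), ('delete', 'orphan')]
```

A real controller runs this comparison in an endless loop against the API server, which is exactly why every input to that loop is security relevant.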
There are many, many places where an attacker could potentially get you into trouble. Now, there are a couple of things that you can do and should be aware of. CI/CD pipelines in this context typically look a bit like this. You have a base image, and hopefully that base image has been vetted and put together by someone who knows their job, typically someone with an admin or ops background, or a vendor who knows what they're doing. The application developer then provides their source code, and you have a process that builds the container application image and puts it into a registry, where you then do automated scans for CVEs, and maybe use something like Grafeas to decide whether a certain artifact, a certain container image created by a certain developer, can be deployed into a certain namespace, say prod.

One basic thing that you should always do in Kubernetes is use service accounts: not the default service account, but explicitly defined service accounts. A service account is essentially an identity for an app. It allows your app to talk to the API server. And if you don't do that, all the other things, like RBAC, don't really work out. So always create a service account there.

And that's the rough flow of authentication and authorization options. There are many; I have highlighted the ones that, in the general case, are the preferred ones: X.509 certs for authentication, and for authorization, ABAC is kind of outdated and you really want to use RBAC. That's role-based access control, which gives you a fine-grained way to say what the application is allowed to do with certain resources, like pods and services and so on and so forth.

In terms of secrets: they have first-class support in Kubernetes, but the bottom line is that, by default, they are not encrypted at rest. So you would need to use a system like HashiCorp Vault, for example, or something your public cloud provider gives you, to actually encrypt them at rest.
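The "not encrypted at rest" point is easy to see for yourself: the values in a Secret manifest are merely base64-encoded, which is an encoding, not encryption. A quick illustration (the secret value is made up):

```python
import base64

# A Kubernetes Secret manifest stores its values base64-encoded, e.g.
#   data:
#     db-password: cyFlY3IzdA==
# Anyone who can read the object, or read etcd directly, can decode it:
encoded = "cyFlY3IzdA=="
plaintext = base64.b64decode(encoded).decode("utf-8")
print(plaintext)  # → s!ecr3t
```

So treat base64 purely as a transport format; confidentiality has to come from encryption at rest (e.g. an envelope-encryption provider or Vault) plus RBAC on who may read Secret objects.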
Otherwise, the UX is quite nice. You can declaratively mount them into your container through a volume or an environment variable. Everything is declarative there. But as I said, by default, not encrypted at rest.

Networking can look quite scary sometimes. You have communication going on within a pod, if you have a sidecar, or if you look at things like Istio, which runs Envoy as a sidecar in a pod. You have east-west traffic between services and pods within the cluster, one service talking to another. Note that, by default, everything is allowed: every service can talk to every other service. So you probably want to use things like network policies to forbid certain communication paths. Then you have north-south traffic, ingress and egress. Again, there are things like Ingress objects that you can use, but typically you will end up putting something in front of your Kubernetes cluster. Then there's the question of mTLS: these services talking to each other, do they know who they are? Things like SPIFFE provide identity there, so you can actually have mutual TLS between the services. And probably you end up using a service mesh that takes care of that, and of other things like observability, for you, and you can just use it and enjoy.

So, a couple of good practices there. Always use trusted base images and define a non-root user; that is not the case for more than 80% of the images that you find on Docker Hub. So don't pull random shit from Docker Hub. Always perform automated CVE scans. Use private registries if you can. Always use namespaces and service accounts. And obviously RBAC, which nowadays is pretty much the default everywhere; many of the applications I've seen that were developed before RBAC was put in place as the standard have a bit of trouble with it, but nowadays it's, as I said, pretty much the standard. And use network policies, which typically is an admin task.
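Since everything is allowed by default, a common starting point is a default-deny NetworkPolicy per namespace, with further policies then allowing specific paths. A minimal sketch (the namespace name `prod` is illustrative):

```yaml
# Deny all ingress traffic to every pod in the "prod" namespace.
# Separate NetworkPolicies can then allow specific communication paths.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prod
spec:
  podSelector: {}        # empty selector = all pods in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all ingress is denied
```

Note that NetworkPolicies only take effect if the cluster's network plugin actually enforces them.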
Moving on to serverless. What on earth is serverless? Well, serverless is really an umbrella term for a number of things that have something in common: someone else manages that for you. You are not provisioning things. You are not going there and spinning up a container and worrying about the container base image and so on and so forth. You provide the code, and off it goes and executes, in the case of FaaS. Or you have databases, data stores, object stores, and so on and so forth. It scales automatically, up and down, depending on the traffic. And you only pay for what you're actually using; for example, in the case of FaaS, Lambda in AWS, you only pay per invocation of that function.

We already talked a little bit about the principle here. You have some kind of trigger; that could be, for example, an upload of an image into S3, or an HTTP call coming in through the API gateway. So you have some kind of event-driven architecture. Typically, you have short-running, stateless functions, so any kind of state needs to be externalized, both for reads and writes, which sometimes leads to trouble in terms of state hydration.

Again, there are a couple of attack vectors there. There are comparatively fewer than with Kubernetes, but attacks are still possible, and a couple of actual attacks have been demonstrated; I have them in the resources section as well. Typically, you're using some kind of framework, and you really should be; you shouldn't be using low-level commands to create the buckets and the Lambda functions and so on and so forth. These frameworks may or may not have the best usage of IAM, so you might want to audit that and make sure the policies are very strict.
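Auditing what a framework generates usually comes down to reading the IAM policy attached to the function's execution role. As a hedged example, a least-privilege policy for a function that only needs to read objects from one bucket might look like this (the bucket name and ARN are illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-uploads/*"
    }
  ]
}
```

If the generated policy instead grants `s3:*` on `*`, that is exactly the kind of thing the audit should catch and tighten.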
You can screw up in terms of the S3 bucket, because the basic idea here is that you upload the code to the S3 bucket, and from then on the serverless runtime (the Lambda runtime, Firecracker, as we've learned recently) takes over and executes it. So someone sneaking in some code there is certainly possible if you don't have the S3 bucket locked down properly. And it has also been demonstrated that you can run arbitrary code in these Lambda functions. So the assumption that you always get a clean sandbox, without any traces from a previous run, is not always true. That is the kind of challenge you have: on the one hand, you want a warm environment that reduces your startup time; on the other hand, there might be traces from a previous run, so you need to balance that. But don't assume, for example, that if you're using Java or Go, it's impossible to execute some Python on the side. That's the baseline here.

A couple of good practices there. Do static code analysis. Do automated dependency vulnerability scans on your libraries, frameworks, and so on, and have a look at what they're doing with the IAM policies. You still have to do input validation; injections are still possible. Make sure that you have proper secrets handling; ideally, again, use the things that your public cloud provider offers to handle these secrets properly. And, overall, just make sure that you only equip your Lambda, and whatever else you need there in terms of triggers and integrations, with very strict IAM roles and policies.

All right, the rest is really just a bunch of resources: blog posts, slide decks, and videos that you might want to check out. And one thing, if you're into Kubernetes: Liz Rice from Aqua Security and myself have put together this website, and there's a book, a very small 70-page book from O'Reilly, that you can download there as well. Essentially a high-level overview of Kubernetes security.
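On the input-validation point from the good practices above: in a function handler this can be as simple as rejecting any event that doesn't match the expected shape before doing any work. A minimal sketch in Python (the event shape and the `image_key` field are assumptions for illustration, not a specific AWS API):

```python
import json

def handler(event, context=None):
    """Toy Lambda-style handler that validates its input before acting.

    Rejects anything that is not a dict carrying a safe, non-empty string
    'image_key', instead of passing attacker-controlled data straight on
    to S3 paths, database queries, or shell commands.
    """
    if not isinstance(event, dict):
        return {"statusCode": 400, "body": "event must be an object"}
    key = event.get("image_key")
    # Reject empty values, path traversal, and absolute paths.
    if not isinstance(key, str) or not key or ".." in key or key.startswith("/"):
        return {"statusCode": 400, "body": "invalid image_key"}
    # ... only from here on is it safe to use `key` ...
    return {"statusCode": 200, "body": json.dumps({"processed": key})}

print(handler({"image_key": "uploads/cat.png"}))  # accepted
print(handler({"image_key": "../etc/passwd"}))    # rejected
```

The same idea applies to every trigger: whatever delivers the event, treat its payload as untrusted input.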
And here are a couple of vendors; I tried not to reinvent the wheel. Most of them provide products and services, some of them open source, for containers, serverless, or both. So you might want to have a look; maybe you're already using one or the other, and definitely check out what they offer, because they bring a lot to the table. I think we have some two minutes left if there are any questions. But overall, think about what can happen in these environments. They rapidly come and go, all these containers and functions, but some of the traditional basic hygiene rules still apply. Okay, two minutes left for questions. No pen testers who want to rip me a new one? No. Cool. All right, thank you. Thank you.