Good afternoon, everyone. My name is Edward Marin and I'm a researcher at Telefonica Research. Today, I will talk about the security of serverless, a new computing paradigm that has experienced significant growth over the last few years and is expected to become the dominant pattern for cloud computing in the future. Over the last decade, we have witnessed significant advancements in cloud computing. These were intended to simplify the development and management of applications as well as reduce the cost of running these applications in the cloud. In the early days of cloud computing, cloud providers gave users the ability to port their monolithic applications to the cloud. Virtual machines were key to achieving this, offering strong isolation while giving users a sense of having an infinite amount of resources. The main downside of this approach was that users not only had to manage the virtual machines themselves, but also had to develop mechanisms to scale their applications. Motivated by these limitations, it was proposed to decompose applications into smaller, independent components known as microservices and place them inside containers. This approach increased portability, provided lower startup times compared to virtual machines, and allowed for greater efficiency. But it also came with some important limitations. As all containers on a host share the same kernel, containers offer weaker isolation guarantees than virtual machines. Also, software developers still need to configure and manage the containers themselves. In addition to this, a common limitation of the previous approaches is that they rely on a static billing model where users pay a fixed monthly amount regardless of the resources they consume. Serverless allows us to overcome these limitations.
First, it allows software developers to outsource all operational and infrastructure tasks to cloud providers, allowing them to focus only on writing the code for their applications. Serverless proposes to decouple storage from computation. The application logic is divided into a set of short-lived, stateless functions, each running inside an execution environment. Storage is provided by cloud services such as S3 or DynamoDB. In serverless, scaling of applications is managed directly by cloud providers, meaning that software developers don't need to worry about when and how their application needs to be scaled up or down. Unlike previous approaches, serverless offers a pure pay-per-use model where users pay only for the resources they consume. Cloud providers such as Amazon, Microsoft, and Google are already offering serverless computing services to their customers. Meanwhile, several open-source serverless platforms have also been developed over the last few years. Before going into the details of serverless security, let me briefly explain how serverless platforms work. Functions are the main element in serverless computing. They run inside a fresh, isolated execution environment such as a container and are typically executed in response to several types of events, such as a web request. Another important component in serverless platforms is the API gateway, which exposes REST endpoints to customers and acts as a bridge between users and functions. If a function is triggered many times, the cloud provider can opt for creating new instances of the same function in other execution environments and redirecting some requests to them. Functions typically communicate with other functions and also with other cloud services, for example, for storage purposes. All communication is done via standard APIs. In addition to this, serverless platforms include a set of control functionalities.
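The flow just described, a web request arriving at an API gateway and triggering a fresh, stateless function instance, can be sketched with a minimal Lambda-style handler. The event shape and handler signature below loosely follow AWS Lambda's common conventions, but all names and payloads are illustrative, not taken from any real application.

```python
import json

def handler(event, context=None):
    """Minimal Lambda-style function: stateless, invoked once per event.

    `event` carries the API-gateway request payload; any durable state
    must live in external storage (e.g. S3 or DynamoDB), never in the
    function instance itself.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    # The return value is handed back to the API gateway as an HTTP response.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a hand-built API-gateway-style event:
resp = handler({"queryStringParameters": {"name": "serverless"}})
```

Note that the function keeps no state between invocations, which is exactly what lets the platform create or destroy instances at will.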
One example of such a control functionality is an authentication server that authenticates incoming requests before passing them to the corresponding functions. Another one is the so-called identity and access management (IAM) component, which is responsible for specifying the functions and cloud services that each function can access. Another well-known component is the so-called virtual private cloud, which allows creating virtual networks containing all the functions that belong to the same application, so that these functions can communicate with each other while preventing others from doing so. In practice, the serverless ecosystem is much more complex than that. The figure shown in the slide is a simple example illustrating a real estate website using serverless functions. Note that software developers can choose to write the functions themselves, use functions from third parties, or use proprietary functions for which they pay licensing fees. Also note that only a subset of functions can communicate with the outside world. Let's now try to understand what the threat model of serverless is. In any serverless platform, adversaries can try to steal sensitive information, such as cryptographic keys or the application logic. They can try to exfiltrate data, control the functions' execution flow, disrupt other applications, run applications without being charged, or, even worse, attribute their application costs to other users. These attacks can be performed by external and internal adversaries: malicious external users that leverage any of the existing external APIs; malicious or compromised functions that try to bypass the specified security policies; or even the cloud provider itself. We think that it's important to model cloud providers as honest-but-curious entities, since they can potentially learn sensitive information about users while running their applications.
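The IAM component's job, deciding which cloud services each function may call, boils down to a default-deny allow-list lookup. The toy check below illustrates the idea; the policy structure and the function/action names (loosely echoing the real estate example) are hypothetical and do not correspond to any provider's actual IAM schema.

```python
# Hypothetical per-function permission map (not a real IAM policy format).
# Each function is granted only the specific service actions it needs.
POLICIES = {
    "create-listing": {"dynamodb:PutItem", "s3:PutObject"},
    "search-listings": {"dynamodb:Query"},
}

def is_allowed(function_name: str, action: str) -> bool:
    """Grant an action only if the function's policy explicitly lists it.

    Unknown functions and unlisted actions are denied by default,
    in the spirit of least privilege.
    """
    return action in POLICIES.get(function_name, set())

allowed = is_allowed("search-listings", "dynamodb:Query")   # granted
denied = is_allowed("search-listings", "s3:PutObject")      # denied
```

The important property is the default: anything not explicitly granted is rejected, so a compromised function cannot reach services its policy never mentioned.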
With the increase in diversity of attacks against the cloud, security and privacy are a key factor for the widespread adoption of serverless computing. At first glance, one could argue that serverless computing is intrinsically more secure than its predecessors. However, in practice, serverless improves some security aspects but makes others worse. Let's first talk about the positive ones. For example, the fact that functions are short-lived makes it more difficult for adversaries to find ways to persist their attacks. The bad news is that adversaries have already proven capable of bypassing this limitation to mount their attacks. But despite this, it is positive for security that functions run for such a short period of time. With serverless, security is a shared responsibility. Software developers are still responsible for application-level security, but the rest of the security tasks are now carried out by cloud providers. This is expected to eliminate a number of attacks against serverless applications. Due to the flexibility and elasticity serverless provides, it is now also possible to mitigate denial-of-service attacks that aim to overload the server where the application runs. Despite these advantages, we can argue that serverless increases the attack surface and introduces unique trade-offs and design choices that can negatively impact security. In the next slides, I will go more in depth on those negative aspects. There are three main reasons why the attack surface of serverless is larger than in previous approaches. First, functions are event-driven, which means that they can be triggered by many types of internal and external event sources with multiple formats and encodings. This clearly opens the door for adversaries to perform many attacks. Second, as functions are stateless and are intended to perform a single task, they are required to constantly interact with other functions and cloud services to realize more complex functionalities.
However, defining and enforcing security policies that specify how functions interact with each other, and which cloud services each function can interact with, is very challenging in such dynamic environments. Finally, serverless platforms include several new components and cloud services, many of which are shared across users. These may lead to new forms of covert and side channels that can result in attacks that retrieve sensitive data, or that allow malicious functions to communicate with each other without the cloud provider noticing it. Ideally, cloud providers would like to develop serverless platforms that jointly maximize the security and performance of their infrastructures and their customers' applications while keeping their costs as low as possible. In practice, however, experience has shown that cloud providers often sacrifice some security to be able to accommodate more users and to provide greater performance for their users' applications. One clear example is the selection of the sandboxing mechanism, as we explained before with the examples of the virtual machine and the container. Another example is the choice between cold and warm containers. Cold containers are containers that are used only once. The problem with cold containers is that their boot time is often similar to the time it takes to execute the function itself. Therefore, the latency introduced by booting can significantly affect the function's performance. In addition to this, cloud providers don't bill users for the boot time of their functions, so of course they want to minimize it as much as possible. To solve this problem, cloud providers have started to use so-called warm containers. Warm containers are containers that are reused to run several instances of the same function. They provide much lower startup times, but at the same time, they also introduce some security risks, as adversaries could mount cross-invocation attacks.
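The cross-invocation risk of warm containers can be made concrete with a small sketch: in a reused execution environment, module-level state (or files left under the temp directory) survives between invocations that land on the same instance, so one invocation can observe residue left by a previous one. The handler below is purely illustrative of this mechanism, not a reproduction of any real platform's behavior.

```python
# Module-level state lives for the lifetime of a warm container, so it is
# implicitly shared across successive invocations served by that instance.
_leftover = {}

def handler(event):
    """Toy function that (carelessly) stashes per-request data in
    container-wide state, simulating residue between invocations."""
    previous = dict(_leftover)           # residue from an earlier invocation
    _leftover["last_user"] = event.get("user")
    return previous

first = handler({"user": "alice"})   # fresh ("cold") instance: sees nothing
second = handler({"user": "bob"})    # warm reuse: sees alice's residue
```

In a cold-container model, `_leftover` would be reset for every invocation; warm reuse is what makes the second call observe the first one's data.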
Another example has to do with the process of assigning functions to hosts. This can be done using deterministic or randomized scheduling algorithms. While deterministic scheduling algorithms can lead to a more optimal use of resources and less communication overhead, randomized algorithms can offer stronger protection against attacks that exploit co-residency with the victim. All these examples show that there is a need to achieve a good balance between security and performance. One of the main advantages of serverless is that cloud providers are now responsible for conducting all operational and infrastructure tasks, including those aimed at protecting their infrastructures and the workloads running on them from internal and external threats. At first, this is expected to reduce the number of attacks against serverless platforms. Yet, this can also lead software developers to ignore security in their applications and to make unrealistic assumptions about the security measures in place. This could create a false sense of security. To make things worse, cloud providers typically keep all or most information about their backends confidential. This makes it difficult to scrutinize the security and privacy of such platforms. Some of the infrastructure aspects cloud providers tend to keep confidential include how function instances are placed on hosts, how resources are assigned and managed, and how isolation is achieved, among others. All of them have in common that they can impact the security and privacy serverless platforms provide. In recent times, researchers have devoted significant effort to understanding and documenting the way the serverless platforms of the main cloud providers operate. Their studies showed that although cloud providers share the same goals, there are important differences in the way they have implemented their serverless infrastructures. Let's now talk about the main attacks against serverless. We have identified three main classes.
One of them is related to the application level. Another one is more specific to serverless. And the last one is about hardware attacks, such as microarchitectural attacks like Meltdown or Rowhammer. Due to time constraints, I will not cover hardware attacks in this talk. OWASP has recently released a report explaining the top 10 security threats for serverless applications. You can see them on the slide. You're probably thinking that all these types of threats are well understood by industry and academia. But unfortunately, software developers keep designing applications with these kinds of vulnerabilities. Some of the most well-known ones, such as injection attacks due to data not being properly sanitized, can allow adversaries to fully control functions. Other attacks, for example, are caused by software developers giving too many permissions to a function. Another classical type of threat is the one caused by using third-party components with known vulnerabilities. To mitigate the previous issues, it is recommended to treat every function as a separate security perimeter. It's also recommended that software developers follow standard secure coding best practices. For example, they should not trust the inputs received by each function. Additionally, software developers should also follow the principle of least privilege, because functions will eventually be compromised, and once a function is compromised, the goal should be to reduce the amount of damage that the adversary can do. And finally, to prevent these kinds of attacks, it's very important to secure data both at rest and in transit. To secure data at rest, one can use cloud services, and to secure data in transit, one can use strong cryptographic protocols such as TLS. Let's now go a little bit more in depth on the serverless-specific attacks. One of them is the class of so-called resource-exhaustion attacks.
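The injection risk mentioned above is worth seeing concretely. Using Python's built-in sqlite3 module as a stand-in for any backing store: building SQL by string interpolation lets attacker-controlled input change the query's meaning, while a parameterized query keeps the input as pure data. The table and payloads are made up for illustration (loosely echoing the real estate example).

```python
import sqlite3

# In-memory database standing in for a function's backing store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listings (id INTEGER, city TEXT)")
conn.executemany("INSERT INTO listings VALUES (?, ?)",
                 [(1, "Madrid"), (2, "Barcelona")])

def search_unsafe(city: str):
    # VULNERABLE: attacker input is spliced directly into the SQL text.
    return conn.execute(
        f"SELECT id FROM listings WHERE city = '{city}'").fetchall()

def search_safe(city: str):
    # Parameterized query: input is bound as a value, never parsed as SQL.
    return conn.execute(
        "SELECT id FROM listings WHERE city = ?", (city,)).fetchall()

payload = "x' OR '1'='1"
leaked = search_unsafe(payload)   # the OR clause matches every row
safe = search_safe(payload)       # no city literally equals the payload
```

The unsafe variant turns the payload into `... WHERE city = 'x' OR '1'='1'` and dumps the whole table; the safe variant returns nothing, because the same string is compared as an ordinary value.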
The goal of resource-exhaustion attacks is to over-utilize the resources of the victim, either to disrupt the service or to impose an excessive financial load on the victim, so basically to perform so-called denial-of-wallet attacks. In such a complex and dynamic system, adversaries can also try to leverage inconsistencies between functions and cloud services. This remains unexplored so far, but I think it's an interesting avenue for research. Side-channel attacks are also very important in serverless platforms, and they come in different flavors. One way these attacks can be mounted is based, for example, on access patterns or timing information. I also envision that there will be intra-container and intra-host side channels, so I expect new attacks in this space in the next years. And last but not least, there is also the possibility for adversaries to exploit the disk space in the temp directory, which is used to keep state shared across different instances of the same function that run in the same container. As future work, I think it's interesting to explore all the attacks I explained in the previous slides in depth. I also think that it's very important to better understand the cloud providers' backends, for example, to investigate the security of new execution environments such as Amazon's Firecracker. And I also think that it's very important to mitigate all side-channel attacks. To conclude my presentation, I would like to say that this is a very challenging and very active research area. I hope that with this presentation I convinced you that there are some very unique security and privacy threats and challenges, which will probably come with new attacks and, hopefully, better defenses in the future.
So I hope that you're convinced that serverless is the way to go and that security and privacy should be considered by design, and they should be considered now that serverless is starting to be adopted. Finally, I want to refer you to a paper on serverless security that explores in more depth all the different aspects that I've been discussing in this presentation. You can find it on my personal website. And I also want to mention that we're organizing a workshop on serverless for mobile networks. You can also see the website there. And that's all from my side. If you have any questions, I'll be very happy to take them. Thank you.