Hello everyone. Hope you are having a great time at the OSS Summit. I'll be speaking today on one of the most trending topics in the cloud native ecosystem, which is serverless and how it is managed on Kubernetes.

A bit about myself: I'm working as a senior DevOps engineer with SAP Labs India. I'm a speaker at various open source and cloud native community events, such as KubeCon + CloudNativeCon, the CNCF Data Management User Group, and the HashiCorp User Group. When I'm not speaking, I often write articles around PaaS, cloud native, and serverless technologies on various social media platforms.

The idea behind serverless is to allocate dynamic resources to run small functions in response to certain events or triggers. When we look at the evolution, we see that it has moved from physical machines, to managing virtual machines, and then to containers.

So serverless provides us three different things. First, you run code, not servers: serverless provides a platform-agnostic experience for cloud developers, who can focus on their application logic while the complete management of the backend infrastructure is done by the cloud providers themselves. Second, pay as you grow: the cost model here depends on the time for which the resources are actually consumed, not on the provisioned capacity. Third, availability and scalability: with the way serverless deployments are done, the underlying systems can handle the load balancing and provision the infrastructure based on needs and demands.

So how does serverless work? The user authenticates themselves to the application and invokes an event. The event results in running certain developer logic, which is executed in a container backed by a cloud function.
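To make that flow concrete, here is a minimal sketch of an event-driven cloud function, using the AWS Lambda Python handler convention (`event`, `context`); the event fields and the greeting logic are hypothetical, just to show where the developer's logic lives.

```python
import json

def handler(event, context):
    """Entry point invoked by the platform for each event or trigger.

    The platform provisions a container, runs this function, and tears
    it down (or keeps it warm) -- the developer only writes the logic
    below, not the server management around it.
    """
    # 'event' carries the trigger payload; this shape is hypothetical.
    user = event.get("user", "anonymous")
    action = event.get("action", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"{user} triggered {action}"}),
    }

# Local invocation for illustration (no real cloud platform involved):
print(handler({"user": "alice", "action": "upload"}, None))
```

Locally this just calls the function directly; on a real platform the same handler would be invoked by the event source.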
Whenever there's an update, the developer simply uploads a newer version and configures the process, without needing to manage the underlying backend infrastructure or the changes required during the update.

So why should we consider running serverless on Kubernetes? Well, Kubernetes has become the de facto standard for deploying, managing, and scaling containerized workloads. What Kubernetes provides is that it avoids vendor lock-in. It gives a single platform to run both your container-based deployments as well as your serverless deployments, so you can leverage your existing services and data. It can support a diverse set of infrastructure, whether you are running on bare metal servers or on public cloud providers such as AWS, Microsoft Azure, or Google Cloud Platform. And it simplifies performance analysis and monitoring through the integration of monitoring tools such as Prometheus within the platform stack.

A look at the serverless landscape from the CNCF gives us a wide set of open source frameworks and projects contributing to serverless technologies. In this picture, this is just to highlight the developer experience of running some of the most popular open source runtimes: Knative from Google, OpenFaaS, which was created by Alex Ellis and his team, Fn from Oracle, Fission from Platform9, Kubeless from Bitnami, and Apache OpenWhisk from IBM. These tools collectively help developers evolve beyond microservices and create serverless application architectures.

If you look at the adoption, serverless usage grew to about 46% in the year 2019, according to a report published by The New Stack. Here we can see AWS Lambda and Google Cloud Functions occupying the top two positions on the leaderboard because of their larger adoption among end users.
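As a taste of the developer experience these frameworks aim for, here is a minimal function in the style of the OpenFaaS Python template, where the framework calls a `handle(req)` function with the request body and uses its return value as the response; the echo logic itself is just illustrative.

```python
def handle(req: str) -> str:
    """OpenFaaS-style entry point: receives the request body as a
    string and returns the response body. Packaging the function into
    a container, routing, and scaling are handled by the framework on
    Kubernetes, not by this code."""
    name = req.strip() or "world"
    return f"Hello, {name}!"

# Local call for illustration; in OpenFaaS this runs behind a gateway.
print(handle("Kubernetes"))
```

The same few lines, wrapped by a framework template, become a deployable serverless function on the cluster.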
However, in the same place we can see there are many new entrants climbing up the ladder, showing a good amount of adoption of serverless at the community level to drive different business use cases.

Even with this continuous adoption at the community level, there are certain areas where serverless has fallen short. The first one is cold start, a well-known issue with AWS Lambda functions: the delay required to invoke a function in response to certain events or triggers, particularly during initialization. This creates latency in the communication between an application and its services and has impacted performance. Next is logging, for managing the requests coming from multiple components and services; what we require is a distributed monitoring mechanism for managing the deployments, with quick visibility and observability, so as to ensure a very simple troubleshooting and optimization process. Then there is security, in terms of vulnerability management and compliance of the serverless platforms. There is also the limited execution environment for running FaaS functions. This has often been a bottleneck; for example, AWS Lambda functions require a minimum of 1.5 GB of memory to start, while in certain cases we have larger requirements where it actually fails, so we need an execution environment that can be adjusted based on user requirements. And finally, the change in DevOps culture and mindset needed to adapt to this serverless paradigm: flexibility of team responsibilities, and helping people train and learn about these serverless technologies so that they can function within a team.

So what are the key takeaways from here? Kubernetes is a big step forward in reducing the sysadmin effort, but it doesn't reduce the effort completely to zero. The serverless approach is the next possible step, which allows us to focus on innovating applications and solutions.
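Coming back to the cold-start concern mentioned above: one common mitigation in function code is to do expensive initialization at module load, outside the handler, so that warm invocations in the same container reuse it. A minimal Python sketch, where the handler signature and the simulated initialization are purely illustrative:

```python
import time

# Counter just to demonstrate that the expensive setup runs only once
# per container, not once per request.
_INIT_COUNT = 0

def _expensive_init():
    global _INIT_COUNT
    _INIT_COUNT += 1
    time.sleep(0.1)  # stand-in for loading SDKs / opening connections
    return {"db": "connected"}

# Cold-start cost is paid here, once, when the container loads the module.
_RESOURCES = _expensive_init()

def handler(event, context):
    # Warm invocations reuse _RESOURCES instead of re-initializing.
    return {"db": _RESOURCES["db"], "inits": _INIT_COUNT}

# Two invocations in the same (warm) container:
print(handler({}, None))
print(handler({}, None))
```

Both calls report a single initialization: only the first request in a fresh container pays the cold-start penalty.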
It basically helps developers build their application logic rather than spend time setting up and managing the infrastructure to run their applications. There are great open source frameworks which support serverless on Kubernetes, more are coming up over time, and this is backed by a very strong tech community. That said, there are certain trade-offs: the convenience, which comes with some lock-in, versus the control, with the effort needed to manage the deployments yourself.

These are some of the resources which I used for shaping up my talk, and with this I end my talk here and I'm open to questions. Thank you.