Good afternoon, everyone, and thank you for coming to my talk. Today I am going to talk about FaaS with Kata Containers: one of the latest emerging technologies, function as a service, and how we are trying to make it more secure with a very exciting project hosted by the OpenStack Foundation called Kata Containers. This work was mostly done by my colleagues Mritika and Tagoo, who could not come here because of travel issues, so I will be presenting their work.

To start with, this is the agenda for my talk. I will briefly discuss how FaaS evolved, what FaaS actually is, and what the advantages of using FaaS are in today's cloud computing domain, in telcos, or anywhere else. Then I will present the problem statement that we are trying to fix with Kata Containers: what is the actual problem we are trying to solve. Next comes the solution, with a brief introduction to Kata Containers and how it fixes the problem, and finally we will look at how we could approach running FaaS with Kata Containers.

Before going into the details of serverless architecture, or function as a service, I would like to talk about how cloud computing architecture has developed and emerged over time. The first one is the monolithic architecture, which means running everything as a single unit of service. We have everything running on a bare-metal node, and it is a single point of failure: if anything goes wrong with your application, your organization is going to suffer a lot because of the monolithic architecture.
That is where the need for a multi-tier architecture arose, and that is how applications evolved into running multiple layers of services. This is mostly a client-server architecture, typically with three tiers: presentation, application logic, and data management. These services are more granular than a monolith, and developers have greater flexibility. For example, if something goes wrong with your application server, you just have to fix that part; you do not have to change anything in the other tiers of your application. That is how we gain agility and flexibility compared with the monolithic architecture. But because each tier runs inside a VM and takes a lot of resources, there was a need for something else, and that is what we now call microservices.

In microservices, we have multiple services running, and every service has its own business logic. All of these services run inside a container, or you could even run them inside a VM. To name the enabling technologies: infrastructure as a service, platform as a service, and then the container technologies, LXC, Docker, and rkt. With containers it is very easy to package an application, scale it, and port it to another system, so they have really made a developer's life easy. But with that comes complexity: the application architect now has to manage the servers, the code repository, the load balancer, and so on, and it needs a mature DevOps team to handle that complexity.
That is how nanoservices, also called serverless architecture or function as a service, evolved. Here the developer just writes a single piece of code and does not need to worry about how their servers are going to be launched or how they will scale up; everything is handled by the cloud provider. Neither the application developer nor the architect has to worry about any underlying details. That is how convenient function as a service is.

We have now seen how the architecture evolved from the monolithic architecture to today's serverless architecture, so let us see what function as a service is. Before defining FaaS, I would like to say what a serverless architecture actually is. In a serverless architecture, a developer does not need to worry about any of the server details: how servers are created or spawned, scalability, upgrades, or any kind of installation. They just need to write the code, and it will be managed by the cloud provider.

Now, what is FaaS? FaaS is a means of enabling serverless architecture on our cloud platforms. With FaaS, the developer writes a single piece of code, uploads it to the cloud provider, and it runs. It is event-driven: the function runs in response to events and the developer gets the output. It really makes developers' lives easy: there is no server provisioning, servers are auto-scaled, and because a function is so granular, just a small piece of code, it executes quickly, and you pay only for the time your functions are actually running on the cloud platform. To name some characteristics from this introduction: FaaS is latency tolerant.
It is also event-driven, short-lived, and periodic.

Here is an example of a Python function, taken from Google Cloud Functions, in which the developer has written just a hello-world function. The developer can run this function on any cloud provider they use and simply get the output; it is a single piece of code running a hello-world program. There are multiple providers for function as a service today, such as IBM OpenWhisk, Google Cloud Functions, and AWS Lambda, all working on enabling function as a service on their cloud platforms.

So, why FaaS? We have already talked about how appealing the FaaS architecture is; these are some of the advantages we get from enabling FaaS on our cloud platform. First, higher scale: because the code is granular and we do not have to run or manage the servers ourselves (the servers are mostly shared by multiple users), you could run on the order of 10x more functions than VMs. Second, lower overhead: these functions can run inside pre-allocated VMs or containers, and while running a function inside a VM has a memory overhead of 2 to 4 GB, running it inside a container has a much lower overhead, roughly 128 MB to 3 GB. And because the developers and cloud architects do not have to worry about any platform details or server management, FaaS really reduces the scaling cost and the development cost, and there is no need for an operations team to manage the hardware, which makes operational management really easy as well.

Now, the problem. What is the problem that we are trying to fix here? As you all know, regular Docker containers are not secure; I am talking about issues like the Dirty COW kernel vulnerability.
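For reference, the hello-world slide mentioned a little earlier is not reproduced in this transcript; a minimal sketch of such a function might look like the following. The single-argument signature follows the Google Cloud Functions Python HTTP-trigger convention; the `name` query-parameter handling is an illustrative assumption, not part of the original slide.

```python
# A minimal "hello world" FaaS function of the kind shown on the slide.
# On Google Cloud Functions, `request` is a Flask Request object;
# any object exposing an `args` mapping will also work here.
def hello_world(request):
    """Return a greeting, optionally personalized via ?name=..."""
    name = "World"
    # `request` may be absent when exercising the function locally.
    if request is not None and getattr(request, "args", None):
        name = request.args.get("name", "World")
    return f"Hello, {name}!"
```

The provider wraps this function in an HTTP endpoint, scales it on demand, and bills only for its execution time, which is exactly the developer experience described above.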
In this diagram you can see three containers. Let us assume that containers A, B, and C belong to different customers. If container A runs some malicious code and manages to compromise the Linux kernel, it can gain access to the other containers, that is, to services belonging to different users. So it is really not safe to just run your functions inside plain containers.

Now, what is the solution? How are we going to fix that? You might already have heard about Kata Containers, or virtualized containers. What they do is run each container inside a very lightweight VM, one with a very low memory footprint that has been optimized to boot up, if not quite as fast as a container, then approximately as fast. In this diagram you see a container running inside a VM. If container A has some buggy code and compromises the Linux kernel, it only compromises the guest kernel inside its own VM, not the actual host kernel. That is how containers B and C stay secure when you use Kata Containers.

Now, a brief introduction to Kata Containers. It is a project managed by the OpenStack Foundation, and it is essentially a collaboration between Intel's Clear Containers project and Hyper's runV technology. As I have already told you, it runs containers inside a very lightweight VM, so you get the speed of containers and the security of VMs: the best of both worlds together.

This brings me to the final slide of my presentation, where we look at how we can run functions inside Kata Containers. Today most functions are run on Kubernetes, and Kubernetes on the back end uses Docker containers, which, as I have already explained, are not as secure as virtualized containers.
In the first diagram you see the functions running inside Docker containers on top of Kubernetes, which is not very secure. What we are going to do instead is enable Kata Containers with Kubernetes, because today you can seamlessly run Kata Containers with Kubernetes. In this approach, Kubernetes hands the workload to Kata, Kata Containers launches the lightweight VM, and the function runs inside that VM. So you get the security of the VM and the speed of the container. This is work in progress, so if you have any questions you can reach me, drop a comment on the reviews, or find me on IRC. Thank you so much.
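As a concrete sketch of the Kata-on-Kubernetes approach just described: Kubernetes exposes a RuntimeClass mechanism, and a pod opts into the Kata runtime through the `runtimeClassName` field of its spec. The dictionary below mirrors the YAML manifest a deployment tool would emit; the class name `kata` and the function image are assumptions for illustration, and a RuntimeClass by that name must already be installed on the cluster.

```python
# Sketch of a pod spec that asks Kubernetes to run a function's
# container under the Kata runtime instead of the default runc.
# Only `runtimeClassName` differs from an ordinary pod definition.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "hello-function"},
    "spec": {
        # This single field selects Kata Containers for the pod:
        "runtimeClassName": "kata",
        "containers": [
            {
                "name": "fn",
                # Hypothetical image holding the deployed function code.
                "image": "example.com/functions/hello:latest",
            }
        ],
    },
}
```

With this in place, the kubelet asks the container runtime to launch the pod's containers inside a lightweight VM, giving each function the VM-level isolation discussed in the talk without changing how the function itself is written or deployed.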