Greetings, viewers, and welcome to this virtual presentation on patterns and practices to minimize provider lock-in with serverless. This has been an extraordinary year, and I want to begin with my profound thanks to the organizers of this conference. It can't have been easy, so heartfelt thanks for all the hard work. When I submitted the abstract for this talk back in February 2020, none of us could have imagined how the rest of the year would unfold. In the same way, none of us could have imagined the amazing progress that has been happening in the world of serverless. In particular, the Cloud Native Computing Foundation (CNCF), which is associated with the Linux Foundation, has a working group that has curated a great deal of material about events and functions. My goal in this talk is to share the work of that CNCF working group and acquaint you with the emerging trends, standards, patterns, and practices that could be helpful. So let's dive in.

It can be a bit confusing to use the word serverless, because clearly there are servers being managed somewhere by somebody. But their significance to the architecture is diminished in this scheme of things. It is better to think in terms of events, event sources, and how they invoke functions synchronously or asynchronously through an enabling platform, which in turn manages the many prerequisites such as identity, data, and access to the various functions in the backend. We have seen papers that motivate serverless architecture as the binding of a Function as a Service (FaaS) construct to a Backend as a Service (BaaS) construct. I want to talk about all the choices you have to think through in order to minimize the lock-in you might otherwise suffer with a vendor. Clearly, you have to think carefully about the platform you choose. How do you go about developing the events?
Where do they originate: a programmer's IDE, a service, an IoT device? How exactly do you bind them to an endpoint? And how do you manage the ongoing lifecycle of all these changes? That is the crux of this talk.

I want to take a moment to introduce myself. My name is Murli Kondiniya. I work with a very large financial institution, focusing primarily on enterprise platforms. I do a lot of cloud-native work around CI/CD, middleware, and SRE, but my fundamental passion is developer productivity: developing patterns and practices that help developers make the right choices, so that the right choices are, by default, the easy choices. So let's dive in.

When you think about serverless, it can be helpful to start with a set of technical use cases. Serverless architectures, or events and functions, depending on how you think about them, are loosely coupled, asynchronous, concurrent, parallelizable systems. They have the means to scale up and down with unpredictable workloads. They are stateless and short-lived, and highly dynamic, supporting change velocity. When you think about the classical applications that tend to benefit, if you are sending a lot of triggers and doing a lot of analytics, doing stream processing or ETL, or even CI/CD for that matter, these are all prime candidates for an event-driven architecture with a Function as a Service backend.

Let's also talk a little about the business use cases, because we all work for enterprises in some form or shape. Think about the integrated account view: as end users interact with your platform, they traverse multiple services exposed to the web, and the transactions you support have to be managed seamlessly across a multitude of devices and channels.
You need a 360-degree, real-time understanding of what is going on with the customer so you can provide a unified experience. Now, think about what you have seen in the world of Uber, Airbnb, and so on, where disruption happens without committing a whole lot of capital to build all of these enterprise services. You typically have the means to federate many of these services, and you can appreciate the loosely coupled way in which you integrate them to create value for end users. Whether you work for a bank or anywhere else in the services sector, you will likely reuse services available in the outside world and create additional value for the end user.

With that background, what exactly are your choices? Typically, when you look at the options for deploying serverless architectures, you can either deploy the platform on your own premises, what we call on-prem, or deploy your systems on the public cloud. I want to quickly share the major platforms that are installable on-prem, as well as the major public cloud providers who offer this as a service. Among the installable platforms, it largely comes down to what flavor of Kubernetes you install on. The options include Knative, an open source platform from the Google community that deploys onto Kubernetes, and the other products listed here can likewise be deployed on Kubernetes or OpenShift.
So the conversation is predominantly about the pros and cons of these installable platforms: how easy they are to operate and run your business on, and how you could take those very same options to the cloud providers so they can be deployed on the cloud, giving you the flexibility to manage the whole development lifecycle. We'll dive into that in a moment.

As I said early on, I want to share the work of the working group within the Cloud Native Computing Foundation. It has been focused on bringing together the communities that are hard at work developing tools, security capabilities, frameworks, hosted platforms, and installable platforms, and it has curated all of this into a taxonomy. I wanted to share this graphic because what I showed on the earlier slide was a small sampling of the major tools, but there is a thriving ecosystem of vendors creating amazing innovation. Some of these are open source, some are not, but you can leverage the good work of this working group to see what the community is up to and how many of them are collaborating with the CNCF.

When you think about functions and services, I made a brief reference to the various personas. The developer persona is focused on developing the functions: identifying the events, identifying the functions, and writing small units of code. On the provider side, you can think of those individuals as operators. They build APIs and subsets of functionality, and they make sure those capabilities automatically scale up and down, with an efficient cost model that does not charge you while resources are idle.
So when you think about events and functions, it is about these two personas focusing on the business logic that leads to economic transactions, and they want the optimal combination of speed and stability. The developer rapidly iterates on the function, integrates with their CI process, and wants to be able to test locally; that is a very key point, and we will touch on it when we identify patterns. They also want to be able to upload their functions to several platforms. The provider, likewise, wants to abstract the runtime, optimize the middleware, and handle the metering and monitoring, so we get the best of both worlds.

When it comes to evaluating platforms, these are the features to keep in mind in your selection. First and foremost, the business model: are your applications open to the loose federation I alluded to before, or will you be unable to federate, so that everything needs to coexist within your own enterprise? The second part is technical suitability. What programming languages do you develop in? Which runtimes are important? Which libraries will you depend on? How do you want to do versioning? Which frameworks is your team comfortable with? What do your development style and deployment workflow look like? How portable do you want your implementation to be? I am only giving a sampling of the suitability attributes to evaluate, because the platform or cloud provider you choose has a significant bearing on them. Then there is operational control. Regardless of whether you deploy on-prem or on the cloud, these are the attributes to think about. What does your environment really look like?
Do you have the means, in a multi-tenancy model, to dedicate resources to your own consumption so you are not penalized when a co-tenant consumes a lot of resources? How do you make sure you suffer no interruption when there are updates to the data center, or when you are dealing with resiliency issues or scalability challenges? What kind of control do you have so that your business application is not adversely affected by events inside the cloud provider's space? Regardless of whether you deploy on-prem or on the cloud, you need to keep these things in mind, because you want to make sure end users are not adversely impacted.

Okay, so let's talk about the lifecycle of how functions come into being. Again, I am generously attributing credit to the good work of the Cloud Native Computing Foundation, which has documented the whole lifecycle of how a function comes into being. You start with a specification and develop the code no differently than any other enterprise application, but you make sure you do the build, run the necessary tests, and then create the artifact you want to deploy. In this case, these are small units of code, called functions; they get deployed, and once they are deployed onto your platform, you want to be able to monitor and scale them. The key thing here is that rather than doing this according to the idiosyncrasies of a cloud provider, the foundation has defined very clear lifecycle steps: create, publish, update the labels, execute, associate event sources, and so on.
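To make those lifecycle steps concrete, here is a minimal, hypothetical sketch in Python. The class and method names (`Function`, `publish`, `update_labels`, `associate`) are my own illustrative choices, not part of any CNCF specification; the point is only that the same provider-neutral sequence of operations can drive any platform.

```python
from enum import Enum

class FunctionState(Enum):
    CREATED = "created"
    PUBLISHED = "published"

class Function:
    """Illustrative model of the CNCF-described lifecycle:
    create -> publish -> update labels -> associate event sources -> execute."""
    def __init__(self, name, code):
        self.name = name
        self.code = code          # the small unit of code to run
        self.labels = {}
        self.event_sources = []
        self.state = FunctionState.CREATED

    def publish(self):
        self.state = FunctionState.PUBLISHED

    def update_labels(self, **labels):
        self.labels.update(labels)

    def associate(self, event_source):
        self.event_sources.append(event_source)

    def execute(self, event):
        if self.state is not FunctionState.PUBLISHED:
            raise RuntimeError("publish the function before executing it")
        return self.code(event)

# Driving the lifecycle the same way, regardless of provider:
fn = Function("resize-image", lambda e: f"resized {e['name']}")
fn.publish()
fn.update_labels(version="v1")
fn.associate("bucket-upload-events")
print(fn.execute({"name": "cat.png"}))   # resized cat.png
```

The value of standardizing these steps is that tooling written against them, rather than against one provider's API, stays portable.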
You get a feel for how the whole lifecycle follows the traditional software development model, so that when you develop functions, you benefit from the same structured thinking. That is important to keep in mind: do not be completely locked in to how a provider treats the lifecycle activities. The next thing is that, in reality, as you develop your functions, they will keep evolving. At the very beginning, you create a function. Once you publish it, you rapidly iterate and create the next version of the function. You want to do this in a thoughtful way, so you can learn from all the iterations you have taken in the past; a subsequent version may even choose to roll back to a prior version. You want to be able to do that, and to keep track of the history of how these functions came into being, so you have the means to assemble them the right way.

The Cloud Native Computing Foundation has also published the various invocation patterns. In the synchronous model, the web generates a call into a gateway, which in turn replicates that request to one or more functions. In the messaging model, a message arrives asynchronously into some sort of exchange and gets queued up; the queue in turn is drained into one or more functions. We have seen these patterns in the enterprise computing space, and they apply to the serverless model as well. In the streaming model, through Kinesis or a similar technology, messages get routed into one or more partitions; they in turn get picked up by one or more functions, which drain the partitions and finish the backend part of the computing.
And then the third pattern is the notion of a master list of tasks coming into a priority queue: the master dispatches the tasks to one or more workers, who in turn execute the functions and return the results. A number of patterns have been published building on these basic premises, and I will share the references in just a minute.

One thing I want to say that is really important, and again this is work done by the Cloud Native Computing Foundation, is that when you define an event or a function, the metadata associated with that event and function really matters. For an event, you want to say what event class it is, what type it is, the version of the event, the identifier of the event, the sources, the identity, and so on. As you can imagine, this is important because if you are using any kind of framework that takes the event and binds it to the function that will execute that request, you need the means to bind the two together. The event is tied to the source, the function is tied to the Function as a Service model, and when you are developing a framework, you want to be able to say how you associated which event with which function. Having all of this data and metadata matters because you are dictating the behavior of these functions based on what is happening. As you can see in the various definitions of the function, the definition also tells you who the function handler is: when the event arrives, the function handler processes the event and routes it to the appropriate function.
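As a rough sketch of that binding, here is a Python example using a CloudEvents-style envelope. The attribute names (`specversion`, `type`, `source`, `id`) follow the CloudEvents specification, while the event type, the handler function, and the `BINDINGS` table are hypothetical names I am inventing for illustration.

```python
# A CloudEvents-style envelope: the metadata (type, source, id, specversion)
# is what lets a framework bind an incoming event to the right function.
def on_order_placed(data):
    # Hypothetical function bound to the "order placed" event type.
    return f"processing order {data['order']}"

# Illustrative binding table from event type to handler function.
BINDINGS = {"com.example.order.placed": on_order_placed}

def handle(event):
    # The function handler inspects the metadata and routes the payload.
    fn = BINDINGS[event["type"]]
    return fn(event["data"])

event = {
    "specversion": "1.0",
    "type": "com.example.order.placed",
    "source": "/web/checkout",
    "id": "A234-1234",
    "data": {"order": 42},
}
print(handle(event))   # processing order 42
```

Because the routing decision depends only on standardized metadata, the same event could be dispatched by any platform that understands the envelope.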
The definition in turn has a bearing on which language runtime the function depends on, what code and dependencies it needs, the environment, the runtime behavior, et cetera. As you can imagine, when you want to externalize the behavior of all these functions, having this in a well-defined attribute class is very helpful, because it allows the developer and the operator to bind the two constructs together.

Now that we have covered the basics of how events and functions come together, the question that begs to be answered is how exactly we orchestrate the functions at the back end, because a single event could trigger one function or a combination of multiple functions. An event could execute multiple functions in sequence, or in parallel; both are perfectly valid. You can also see the daisy-chaining effect, where the result of one function in turn triggers another function, and you need to be able to manage that daisy chaining at the back end so you can manage all of those functions and events one after the other. The URL I list below lays out the many patterns being documented, which address the fine-grained details of building sophisticated enterprise applications on this thinking. The key takeaway from this slide is that if you have n events and m functions, you have an n-to-m cardinality, and you want the programming model to help you bind the two together.

Now, the key here is that the many providers all offer toolkits that help development teams develop and deploy to their frameworks. What I am showing in this chart is the various cloud providers who have been partnering with the company called Serverless.
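The sequential and parallel composition just described can be sketched in a few lines of Python. The function names (`enrich`, `persist`, `notify`) and the two helpers are hypothetical; this is only meant to show the two composition shapes, not any particular orchestrator's API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical functions that a single event might trigger.
def enrich(data):   return {**data, "enriched": True}
def persist(data):  return {**data, "persisted": True}
def notify(data):   return {**data, "notified": True}

def run_sequence(data, fns):
    # Daisy chaining: each function's result feeds the next function.
    for fn in fns:
        data = fn(data)
    return data

def run_parallel(data, fns):
    # Fan-out: every function receives the same event payload concurrently.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda fn: fn(data), fns))

event = {"order": 42}
print(run_sequence(event, [enrich, persist]))  # {'order': 42, 'enriched': True, 'persisted': True}
print(run_parallel(event, [persist, notify]))
```

A real orchestrator adds retries, timeouts, and state tracking on top of these shapes, but the n-to-m binding of events to functions reduces to compositions like these.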
They have both an open source framework and a commercial framework that abstract away the idiosyncrasies of the cloud provider, so when you program with Serverless, you are assured you can deploy to one of the many cloud providers they have already worked with. For example, if you are currently on AWS, you could use the framework available from AWS, called the Serverless Application Model (SAM), or you could choose the Serverless Framework from the company Serverless. What they have essentially done is take the sequence of activities you would have to perform interacting with AWS and codify it into the framework. If you are developing applications or products that will be deployed to multiple clouds, which is a perfectly valid business model, this gives you ease of development: once you have trained your developers to use the Serverless Framework, it abstracts away the complexity associated with the cloud provider. I would love to see more of these, because that is definitely good for productivity, though there are pros and cons. Now that a foundation like the Cloud Native Computing Foundation is taking all of the foundational architectural attributes and addressing them so you can build upon them, we would love to see frameworks like the Serverless Framework support the CNCF's work, so we can all benefit from that standardization.

There is another implementation that I am somewhat biased towards: Knative. Knative is the serverless implementation from the Kubernetes community. What you see here is the various personas interacting with Knative and Kubernetes. The reason I tend to favor it is that all of this is open source, and Kubernetes is widely accepted, if not becoming the standard, for cloud orchestration. The developer persona who wants to build and deploy can develop those capabilities using standardized APIs, and Knative has the means to build and deploy your functions onto the Kubernetes framework. You can either leverage an existing application already deployed as a container, or work with functions like the ones you see in the non-Kubernetes implementations. This gives you the best of both worlds: you have the operators who take care of all of the deployment and management. There are two constructs, called eventing and serving. Serving takes care of all aspects of exposing the functions to your end users at the top, and eventing takes care of all the plumbing you would need with the providers and the sources of events. It serves the developer persona, the operator persona, and the end-user persona, as well as contributions from the community at large, which is continuously developing and improving on top of this framework, all built on an open source platform.

The advantage of such a framework is that you can deploy it on-prem or on the cloud, and for that matter with any cloud vendor, because every cloud provider supports Kubernetes. By having your application runtime map to Kubernetes, you benefit from the support of every cloud provider committed to Kubernetes, which gives you the extensive portability we all want. I would love to say this is a thriving space where you can expect a lot more innovation. Just as I like the Knative implementation, you can also see implementations from frameworks like Spring, which has developed Spring Cloud Function, which can likewise be built and deployed onto Kubernetes. So whether you are a Java shop, a Node.js shop, or doing things with Python, there is extensive support available to develop and deploy your events and Function as a Service onto this platform. By working closely with a standards group like the Cloud Native Computing Foundation, I really expect to see the community standardize on these technologies in the coming months. With that, I would like to thank you for listening, and I would love to take any questions.