Good morning everyone. For the last few months we've been working with serverless architectures, and today I'm going to talk about what we did and the problems we encountered, so you can jump in without falling into the same pitfalls we did. First I'll walk through the evolution of computing, from physical servers up to serverless. Then I'll talk about serverless architecture itself: what components are in it and what you can do with it. And finally, I'll talk about how we did with serverless, migrating a legacy Rails application on Ruby 1.8 to a serverless approach. Early in the days, we had physical data servers, and life was pretty hard back then. Imagine you're running a server in one country and suddenly you get a spike of requests from another country; you have to rack up more servers there to serve those users better. Server management was really hard: if one of the boxes went down, it was really hard to recover. Later in the timeline, virtualization came along and things got pretty interesting; server utilization went up to a level that was awesome. And then the cloud happened. People moved from physical servers to virtual machines in the cloud, different cloud vendors came in, and things got even more interesting. You didn't have to worry about managing the servers, and you didn't have to worry about failover: if your servers went down, the cloud vendor would spin up another bunch of virtual machines. And then, finally, Docker happened. Utilization improved even further with containerization: you can take one server and run multiple instances on it, and the utilization there is pretty much awesome. But still, the management is something you, or one of your team members, have to do.
Provisioning and managing these VM instances is pretty hard. If you're not coming from that background, you'll constantly be working on updating these servers with security patches and so on, and if your servers go down, you'll have to work even harder to get things up and running again. As AWS's Werner Vogels puts it, "No server is easier to manage than no server." If you don't have to manage servers, you don't have to worry about anything. That's where serverless computing comes in. Serverless doesn't mean there are no servers. The servers are there; you just don't care about them. You write the business logic, and someone else takes care of the servers: provisioning, utilization, operational management such as security updates, and, especially, scaling. You don't have to anticipate the scale at all. If you get a billion requests, these vendors will support that; if you get only one or two requests, it still works. And serverless is not the same as a SaaS platform. With SaaS, you're provided with a set of applications; with serverless computing, you have those services, but you also do the computational part yourself. Say you get a request: you know what needs to be done, so you write the business logic and hand it to the provider, and they do the rest of the work. This is currently a trending architecture. It popped up around two or three years ago, and GitHub has even dedicated an Explore page to serverless architecture. Whether it lasts, we'll have to see what happens in another two or three years, but it's definitely growing.
Talking about the big players: Airbnb, Expedia, Coca-Cola, Atlassian, and companies like that are using serverless architecture. A couple of statistics: Thomson Reuters processes about 4,000 requests per second; FINRA processes half a trillion validations per day using serverless; Vivo handles about 80% of its traffic with serverless; and Expedia triggers 1.2 billion Lambda requests. Another interesting case study is the Australian census website. They spent about 10 million Australian dollars to build a census site with load balancing and scaling, everything tested. But when they deployed it, the site crashed from day one because it couldn't handle the scale. Then a couple of guys at a hackathon used serverless architectures and tools and built an application for under $500 that catered to that scale. At the moment, there are a number of vendors providing serverless capabilities. The main player is AWS Lambda, and then we have Google Cloud Functions and Azure Functions. A really awesome open source project is Apache OpenWhisk, which gives you the whole serverless backend as open source; if you want a private FaaS, you can use OpenWhisk. Just to get an understanding of the cost, here's a little comparison. Imagine you have a system that gets 1,600 requests per day, and each request takes about 200 milliseconds to process. With two EC2 boxes, it would cost you around $3 per day, but using Lambda you can reduce that to about $0.05. In our case, we've been using AWS, so later in this presentation a lot of my insights will be AWS-oriented, but all of these things can be done with OpenWhisk, Azure, or Google Cloud as well. If we talk about the AWS toolbox, there is a set of services with which you can do a lot around serverless computing. The main pillar is AWS Lambda, where you do the computational part, and then there are storage services like S3.
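To put rough numbers on that EC2-versus-Lambda comparison: Lambda bills a flat fee per request plus a charge per GB-second of compute time. Here is a small sketch in Python; the prices are AWS's long-standing public list prices, but treat the exact figures, and the 1 GB memory setting, as assumptions, and note that the free tier is ignored. Depending on memory size, the talk's workload lands between a fraction of a cent and a few cents per day, well under the EC2 figure.

```python
# Rough Lambda cost model: pay per request plus per GB-second of compute.
# Prices are assumptions based on AWS's published list prices.
PRICE_PER_REQUEST = 0.20 / 1_000_000      # $0.20 per million requests
PRICE_PER_GB_SECOND = 0.0000166667        # compute charge per GB-second

def lambda_cost_per_day(requests_per_day, duration_s, memory_gb):
    # Each invocation incurs the request fee plus duration * memory compute.
    request_cost = requests_per_day * PRICE_PER_REQUEST
    compute_cost = requests_per_day * duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# The scenario from the talk: 1,600 requests/day, 200 ms each, 1 GB memory.
cost = lambda_cost_per_day(1_600, 0.200, 1.0)
print(f"${cost:.4f} per day")
```

The interesting property is that the cost scales to zero with traffic: at one request a day you pay essentially nothing, whereas the EC2 boxes cost the same $3 whether they are busy or idle.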
S3 gives you storage, and you can even host a static website there. For the database, you have RDS or DynamoDB. If you want to handle HTTP requests, expose another set of APIs, and have a mediator in between, you can use API Gateway. If you want some sort of message queuing service, you can use SQS, and for notifications, SNS. If, whenever a request comes in, you want to throw it into an analytics pipeline, you can use a Kinesis stream. If you want to do some sort of orchestration across all these things, you can use Step Functions. And for monitoring, you can use X-Ray. There are a number of services; it's up to you to decide which ones you want to use. So let me talk about how to get a serverless function up and running. The basic step is to go to the AWS Lambda page; this is where you add your code. You have to name your function and give it a description and a runtime. There are a number of runtimes available in AWS Lambda: Node.js, Python, Go, even C# is supported, and more languages are coming in. Likewise, Azure Functions supports a lot of technologies, Google Cloud Functions supports a lot of things, and Apache OpenWhisk supports a lot of languages. So you don't have to care much about the language you use; all you have to care about is what you want to do and what problems you have to solve. Basically, you code the logic for what should happen when some sort of request or event occurs. You write your logic there, and that's it; you don't have to care about anything beyond that. The scaling and everything else is handled for you, so forget about it. Then you can assign a set of triggers for this Lambda: you can say, if an HTTP request comes in, trigger this Lambda; if you want a webhook or something else, trigger this Lambda.
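The logic you write boils down to a handler function that receives the triggering event and returns a result. Here is a minimal sketch in Python; the greeting logic and event fields are illustrative, but the `handler(event, context)` signature and the status-code/headers/body response shape are what AWS Lambda and an API Gateway proxy integration expect.

```python
import json

def handler(event, context):
    # Lambda passes the trigger payload in `event`; with an API Gateway
    # proxy integration, the HTTP body arrives as a JSON string.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    # Return an API Gateway-style response: status code, headers, body.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

That function is the entire deployable unit: there is no web server to configure, and the same handler serves one request a day or thousands per second.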
You can hook up a number of different event sources to this Lambda to trigger it in the same way. Take the HTTP event: say I have an API Gateway with an API where an HTTP GET request is expected. When the request happens, it goes through a cycle. First, you can hook up an authentication service as the initial check; if you don't need it, you can leave it out. Then, if you want to alter or modify the request headers and so on, you can use the second box. Then it calls the Lambda function, after which you can modify the response and pass it back to the user. That's the basic skeleton of Lambda functions behind API Gateway: whenever an HTTP request comes in, you do whatever you want and pass back the response. All you have to care about is that part. You don't know what operating system runs underneath or what security updates it has, because the service provider is taking care of those things. As you can see, though, you have to work with a number of services to get this done: Lambda, API Gateway, and whatever else. So people working with open source technologies came up with frameworks to make this easier. Currently there are two major frameworks that stand out, among a number of others: Apex is one, and the Serverless Framework is the other. Apex is totally focused on the Amazon stack, but the Serverless Framework folks are trying to generalize it and get a number of providers on board. Currently they support the AWS stack, the Azure stack, and the OpenWhisk stack, and people are working on getting Google Cloud Functions up and running. I'll talk about the Serverless Framework, because that's the one I've tried, and it's really easy to get your service up and running. All you have to do is install serverless and create a template.
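At the core of a freshly created template is a `serverless.yml` that wires functions to events. The talk uses the Node.js template; here is the equivalent shape for a Python function, as a sketch: the service name, handler path, runtime, and region are all illustrative.

```yaml
# serverless.yml - minimal service definition (names are illustrative)
service: hello-service

provider:
  name: aws
  runtime: python3.9
  region: us-east-1

functions:
  hello:
    handler: handler.handler      # handler.py, function `handler`
    events:
      - http:                     # creates an API Gateway endpoint
          path: hello
          method: get
```

With a `handler.py` next to this file, running `serverless deploy` packages the code, creates the Lambda and the API Gateway endpoint, and prints a URL you can curl.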
So you can define your function using, say, the Node.js template. You create a function, fill in the building blocks where you type your code, and tell it what you want to do. Then you just type serverless deploy, and it creates all the API Gateways, all the Lambdas, and so on. Then you can curl the endpoint and check everything. Those are the basic steps; there's much more to it, so you can go and read about it. Another cool thing about Serverless is that they keep the core very simple, but it's an extensible framework, so there are a number of plugins available. If you want DynamoDB-related things, you can add one plugin; if you want to do Alexa-related things, you can plug in another, and so on. There are a bunch of examples as well: if you want to see what you can do and how people are doing it, you can refer to their examples page, where there are thirty-odd tutorials, from the basics up to boilerplates. So let's talk about a case study where we got our hands dirty with these technologies. The application we worked on, Kavei, is a checklist management system. Talking about the architecture: we had a web client and a mobile client. Inside AWS we had our back end, connected through CloudFront, with two different services. One is our API, and the other is the SNAP service, which ran on ECS containers and talked to a MySQL RDS instance. The technologies we were using were Rails 1.8, Ruby 1.8, and MySQL, with Angular and so on for the front end. But there were a lot of dependencies we couldn't get updated to Ruby 2.0, so it would have been a lot of work to upgrade. So we thought: let's keep the system as it is, extract as many functions out of it as possible, and reimplement those using serverless.
That way we could serve them with new technologies. We had active users on the application, so we couldn't take it down. So we went with a microservices approach: we decomposed our monolithic application into separate services. At first we were still sharing the same database, but following the service approach we extracted some of the data into schemaless and other separate databases. And to make life easier, we used the Serverless Framework. After experimenting with a lot of things, we decomposed our existing application and reimplemented parts of it with the Serverless Framework, so we had two sets of services up and running: one serverless and one on the ECS cluster. If a user comes in and hits the new API, he's routed to the serverless side, and the other existing APIs are routed to the ECS cluster. Beyond that, the serverless side was using MongoDB as well, and for the data that was already in our relational database, it talked to the RDS instance over a VPC. There are a lot of things coming up: cool stuff like Step Functions and edge functions. With Step Functions, if you have a set of paths like a state machine, you can use them to orchestrate those steps. And there's a really cool feature called Lambda@Edge: in the AWS ecosystem there are a lot of availability zones and edge locations, and you can run these functions at the edges rather than on a central server. So when a request comes in, it's served from the edge itself rather than going back to the main servers. That's really cool. Now, I've talked about all the good things about serverless, but it's not always good; you have to think about the bad parts as well. Even though it's serverless, underneath it uses containerized mechanisms.
So there's the issue of warm starts versus cold starts. When your function has been invoked recently, it's in a warm state, but if few requests are coming in, it automatically drops back to a cold state. Getting it up and running from a cold start can take a while, so you have to think about that. There are currently hacks available to keep your Lambda warm, and in the future there may be services that keep your functions warm all the time, but we'll see how things go. It's also not advisable to run long-running processes on serverless; if you want a long-running process, it's better to use a container. And if you still want control of your server, it's better to use containers rather than going serverless. A couple of other downsides: a lot of code duplication and logic duplication happens across these serverless functions. And if your service depends on a lot of libraries, serverless isn't a good fit, because whenever a request comes in, all those libraries have to be loaded along with your container, and getting that up and running may take a while; in terms of runtime, it might not be as good as using a container. Another key thing is that, at the moment, you are locked into your vendor. The Serverless Framework folks are trying to abstract that away so you can switch between providers, but the switching part hasn't been implemented yet. People are talking about how, if you have a set of services written for AWS, you could get them up and running on OpenWhisk and so on, but that hasn't been implemented yet; the Serverless folks are discussing how it could be. With that, I think I can conclude my session. If you want to talk about it more, you can reach me on any of these channels: you can use rhrmsh to contact me, or send me an email at rumeshh at niderex.ok. Thank you.
Do we have any questions from the audience? [Question about Apache OpenWhisk, the open source version of the serverless infrastructure.] Sure. It's been around for quite some time, but I haven't played with it for long. I know it has the capabilities to serve HTTP requests, and it runs on Kubernetes. This open source version is used by IBM for their cloud functions service. I'm not too sure whether it caters at the same level as AWS Lambda, but it seems good enough to serve your basic needs; beyond that, I'm not too sure how good it is. Any other questions? OK, thank you.