So, hi guys, my name is Gayan; I work for a marketing automation startup in Singapore. I am Arjun from Sri Lanka. Today we will be talking about how we adopted serverless for certain aspects of our application. To start with, I'm pretty sure all of you are aware of the serverless offering at AWS, which is Lambda, but there is more than one service that does serverless. Today we will be talking about AWS Lambda itself.

In general, our application is a PHP application. It is a monolithic application that follows general design patterns, MVC architecture and all that. What you see here is the typical structure we had earlier: the ELB, the cache services, the database services, and the application servers. One issue we had with this, if you look closely, is that most applications today offload tasks to workers, which reduces the load on the application servers so that your web servers perform the way you want. The traditional way of doing this was to use a messaging queue: you push a message onto the queue and continue with the normal web request handling. Then there is another EC2 instance, or really any application server or worker server, which processes those individual messages.

The problem with this is: imagine you have about 100,000 messages going into the queue every day. You need to make sure these get processed as soon as possible, and that means you need to spin up a number of workers. The downside is that you spend money spinning up instances that keep running for a while. The other thing is that there is an actual delay in processing: if your queue has about 100,000 items, how quickly a worker can process an item determines how soon you finish. So in general, these were the problems we had.
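The queue-plus-workers setup described above can be sketched in a few lines of Python. This is a simplified stand-in for the SQS-plus-EC2-workers arrangement, not the speakers' actual code; the queue, handler, and worker count are illustrative:

```python
import queue
import threading

def run_workers(task_queue, handle, num_workers):
    """Drain the queue with a fixed pool of worker threads.

    Completion time is bounded by how fast each worker can process
    one item, which is exactly the delay problem described: more
    backlog means either more workers or a longer wait.
    """
    def worker():
        while True:
            msg = task_queue.get()
            if msg is None:          # sentinel: no more work
                task_queue.task_done()
                return
            handle(msg)
            task_queue.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for _ in threads:                # one sentinel per worker
        task_queue.put(None)
    task_queue.join()

# The web tier enqueues events; the worker pool processes them.
q = queue.Queue()
processed = []
for i in range(10):
    q.put({"event": "view", "id": i})
run_workers(q, processed.append, num_workers=3)
print(len(processed))  # 10
```

The point of the sketch is the fixed `num_workers`: to drain a larger backlog faster you must raise it, which in the EC2 world means paying for more always-on instances.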
At least one worker was always running, so that we didn't spend time spinning up an instance whenever there was an item in the queue. The other problem was managing configurations for all the instances: when you have a multi-tenant application, you need to make sure a worker is configured for every tenant, and then you need to keep all those configurations in sync. There are a lot of tools you can use for server orchestration, but it's still a pain point where you need someone to actually do that for you. The other problem is that parallel processing is expensive: if you want to finish those 100,000 messages in a shorter time, you need to spin up enough workers. This is arguable, because the cost is not that bad these days; with spot instance offerings and so on, the cost is actually minimal. It's more the overhead you spend on the staff and the resources allocated for this.

So what did we change? We have used Lambda for several use cases. To add context to the diagram: this is a marketing automation tool where each customer has connected mobile applications. The simplest explanation is analytics. For example, if you have been to Harry's here, Harry's is one of our clients. Whenever a user views a campaign on the mobile app, we record those views, how much time they spent, and how many redemptions were done for a particular campaign or coupon. Earlier, all this tracking went to a REST endpoint where the mobile app pushed the data and we recorded it. But now that we have offloaded it to Lambda, the mobile app pushes the data directly to a topic that is pre-configured on SNS.
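Publishing directly to the pre-configured SNS topic looks roughly like this. The topic ARN, field names, and helper functions here are made up for illustration; only the `sns_client.publish(TopicArn=..., Message=...)` call reflects the actual SNS API:

```python
import json
import time

# Hypothetical topic ARN; in the setup described, each app is
# pre-configured with the SNS topic it should publish to.
ANALYTICS_TOPIC_ARN = "arn:aws:sns:ap-southeast-1:123456789012:analytics-events"

def build_event(tenant, campaign_id, action, duration_ms):
    """Build the JSON payload for one analytics event (fields are illustrative)."""
    return json.dumps({
        "tenant": tenant,
        "campaign_id": campaign_id,
        "action": action,            # e.g. "view" or "redemption"
        "duration_ms": duration_ms,
        "ts": int(time.time()),
    })

def publish_event(sns_client, payload):
    """Publish one event to the topic; a Lambda data collector
    subscribed to the topic picks it up from there."""
    return sns_client.publish(TopicArn=ANALYTICS_TOPIC_ARN, Message=payload)

msg = build_event("harrys", "xmas-coupon", "view", 4200)
print(json.loads(msg)["action"])  # view
```

In the real system `sns_client` would be a boto3 (or, in their case, AWS PHP SDK) SNS client; nothing else server-side sits in the hot path.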
And we have one function taking care of the data collection. The advantage is that when you have 100,000 users pushing data simultaneously, it's quite fast, and since we don't have to worry about resourcing, the functions get triggered very quickly. Compare this to the traditional structure where we pushed all these analytics messages to a queue and had workers process them: 100,000 users multiplied by the number of times they visit the app is a lot of messages. Processing it in this manner, whenever something hits the SNS topic it gets processed immediately, because the Lambda function is subscribed to the SNS topic. So that gives us fast processing.

The other thing we use this for is SMS gateways. In general implementations, regardless of whether you use PHP or Java, for most SMS gateway integrations (so far we have integrated with Clickatell and Twilio), whenever we need to add a new driver, we need to actually implement the driver and make sure the code supports it. Now that we've changed it, what happens is this: for each driver, there's a separate topic to which we send a message using the AWS SDK. So nothing changes except the topic. Each function is a standalone function on Lambda which sits behind each SMS driver. So this is an isolated implementation of the SMS driver, and we don't depend on third-party libraries or anything in the application itself; it can be managed individually. If something stops working, or if we have a bug, we have an alternative fallback: we can quickly switch the function without deploying the whole application to fix that issue. So this is the standard architecture that we use.
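On the receiving side, a Lambda function subscribed to an SNS topic gets each published message wrapped in a `Records` list. The record envelope below is the standard shape SNS hands to a subscribed Lambda; the handler body and payload fields are an illustrative sketch, with storage stubbed out:

```python
import json

def handler(event, context):
    """Entry point for a data-collector Lambda subscribed to the topic.

    SNS delivers each published message wrapped in a Records list;
    we unwrap and collect each payload (real code would write to the DB).
    """
    stored = []
    for record in event.get("Records", []):
        payload = json.loads(record["Sns"]["Message"])
        stored.append(payload)
    return {"processed": len(stored), "events": stored}

# Shape of the event SNS hands to a subscribed Lambda (trimmed):
sample_event = {
    "Records": [
        {"Sns": {"Message": json.dumps({"campaign_id": "c1", "action": "view"})}}
    ]
}
print(handler(sample_event, None)["processed"])  # 1
```

Because SNS fans out and invokes the function per delivery, there is no worker pool to size; concurrency scales with the incoming traffic.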
Our plan is basically to move as many functions as possible, because, as I said when I started, what we call adopting serverless here is that we have a monolithic structure right now, but we wanted to use serverless to offload certain aspects of the application. We couldn't rewrite it as microservices, although that's a hot topic these days. You can't actually rewrite an application overnight. When you have spent two and a half years trying to shape the product, you can't just go and convince your management that you need a rewrite to support microservices because it's a hot topic. So what happened was we started offloading certain functions so that we could decouple whatever we can while the code remains the same.

A few advantages we got from this: one, analytics data didn't hit our servers directly; it was just going through the AWS SDK to SNS, and the data collectors were pushing it to our DBs. Two, the SMS drivers as a service made it quite easy for us to add additional SMS drivers on the run, and to quickly switch between them; we just need to deploy the configuration, because only the topics and the configs changed. Right now we also use it for push notification processing and for processing bulk emails. Our idea is to move as many jobs as possible to serverless in the future. So, thanks.

Yes, but the thing is, right now what we offload to Lambda is the processing that is quite light; we don't actually run functions for more than 100 milliseconds right now. So large or long-running processing, like cron jobs, is not actually a fit right now. Although we are processing push notifications, that's quite different, because we have a queue which holds the items, and then there's a function that runs for five minutes. Yeah, then it continues, run, yeah. Anything else? Thank you.
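The push-notification pattern mentioned at the end, a queue holding the items plus a function that runs for up to five minutes and then hands off, can be sketched as a time-boxed drain loop. The time budget, queue source, and function names here are illustrative, not from the talk:

```python
import time

FIVE_MINUTES = 300  # the per-invocation budget in the setup described

def drain(items, send, budget_seconds=FIVE_MINUTES, clock=time.monotonic):
    """Process queued items until either the queue is empty or the
    time budget is nearly spent; return whatever is left so the next
    invocation can continue from there.
    """
    deadline = clock() + budget_seconds
    remaining = list(items)
    while remaining and clock() < deadline:
        send(remaining.pop(0))
    return remaining   # re-queued; the next run picks these up

sent = []
leftover = drain(["push-1", "push-2", "push-3"], sent.append)
print(len(sent), len(leftover))  # 3 0
```

Injecting `clock` keeps the loop testable; in production the return value would go back onto the queue so a follow-up invocation continues the batch.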