So, okay, I'll start. This talk is about deploying a Clojure program on AWS Lambda. It's actually my personal project: a bot that retweets Stack Overflow questions to Twitter. That's the account; if you're interested, you can follow it. It's all about Clojure and ClojureScript questions.

Is it yours? Is it yours?

Oh, what do you mean? Mine? Well, I'm not only a maintainer.

So, some background. First, it's a tweet bot. Previously I ran it on DigitalOcean, using a cron job to run it every two hours. I paid $20 a month for an instance hosting both my blog and the bot itself. Later, something changed: my children came into the world, so I really had no time to maintain my blog anymore, and I didn't want to keep paying the $20. So I needed to move it to a cheaper solution, and I saw AWS Lambda. I started thinking about how difficult it would be to move Clojure code to AWS. If you look at the code here, previously a Java process ran the job. I did some research and found that it's actually quite easy to move all my code into AWS Lambda.

There were around five problems that needed to be solved. The first is Java interop, because AWS doesn't have a native Clojure runtime, but it does have a Java one. Clojure gives us very strong ways to interact with Java, so that's not a problem, with some help from AOT. What my bot does now is: every two hours it queries the Stack Overflow API to get the list of questions. But it saves the last-updated time, so it only queries within that time range. That's a problem, because on my instance I saved it in a file, but on AWS I can't use the local file system, so I need an AWS solution. There's also the question of my credentials, plus some AWS-specific problems I'll talk about later. I'll go through these solutions one by one. Any questions?

So, first: Java interop. I won't talk too much about how you call a Java function. What we actually use here is gen-class.
What gen-class provides is a way to generate Java classes: you can specify the class name and also the methods. For example, in my case I want to generate a Java class called TweetPublisher, in some package, and I specify this method, publish. Then in my namespace I only need to define this method as -publish, and Clojure will handle everything for me: it will generate a class with this package name and class name, including the publish method. One thing to mention here: if you're using AWS Lambda, it passes the event to publish as a Map. When I first did this, I declared publish as taking a String — I wanted to get the event in string format and parse it myself — but that failed. AWS has its default way: if you need the event, you have to take a Map and call its getters. Any questions here?

Why is it -publish and not just publish?

That's the default naming convention of Clojure's gen-class. The method is called publish on the Java side, but in the namespace itself you need to define it with a dash prefix. Also, if you don't care about the package name, you can use the :gen-class option of the ns macro instead of calling gen-class directly. Calling it directly gives me the possibility of renaming the package and class. You can see this is the code I moved to Lambda; I did not change any of my logic, I just added this part, and it worked immediately.

Then I need to mention AOT, ahead-of-time compilation. I'm not sure how many of you use it; it's a very cool feature. It compiles your Clojure code into Java classes. At my last job we used it to improve our startup time and also performance. But if you're relying on Clojure's dynamic class loading or other dynamic features, you may need to use it carefully.
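A gen-class declaration like the one described above might look like this — a minimal sketch where the package, class, and namespace names are hypothetical, not the bot's actual ones:

```clojure
;; Sketch of the gen-class setup described above. The package, class,
;; and namespace names are placeholders, not the project's actual ones.
(ns bot.publisher
  (:gen-class
    :name com.example.TweetPublisher            ; Java-visible class name
    :methods [[publish [java.util.Map] void]])) ; Lambda passes the event as a Map

;; gen-class resolves methods by prefixing "-" (the default :prefix).
;; `this` is the instance; `event` is the Lambda event as a java.util.Map.
(defn -publish [this event]
  (println "got event:" event))
```

The Lambda handler would then be configured as com.example.TweetPublisher::publish, pointing at the generated class and method.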
So you can see, after I uberjar my code, it generates the classes under the package name I specified, and it compiles all my Clojure code. OK.

How do you set up AOT?

Which one? How do you set up AOT? OK, let me show you. It's very easy. The project is on GitLab. Normally we only do it when we build the uberjar. When you set :aot :all, it does everything for you. There are other ways too: you can specify the namespaces, setting :aot to just the namespaces you want compiled, and it will find all the dependencies and handle everything for you. OK, any questions? No?

So that made me very comfortable. I didn't need to touch any of my existing code or logic, and I could move it very quickly from the command line to AWS.

That brings me to the second problem I mentioned: S3 access. I didn't use a database or anything like that; I just want to save a timestamp, one line in a file. So I went with an S3 solution. Amazonica is a great library; I'm not sure whether you've used it. I think it covers 70% of the functions of the AWS SDK, so you can do almost everything with it. From your REPL you can even run your AWS infrastructure without touching the console: you can build your EC2 instances, access your S3, do almost anything you'd normally do in the console. That's very cool.

You may encounter some problems when you use Amazonica, though. What I encountered were dependency problems, because it wraps almost all the Amazon services, but I only need S3. So when I declared the Amazonica dependency, I excluded the full aws-java-sdk (DynamoDB and so on) and added the S3 dependency separately myself. That minimizes your jar; otherwise it downloads the entire Amazon SDK into your package. Also, the library is a bit slow to load. But the API is very clear and simple.
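In Leiningen, the AOT setup described above might look roughly like this — the project name and namespace are placeholders:

```clojure
;; Hypothetical project.clj fragment showing the AOT options mentioned above.
(defproject so-tweet-bot "0.1.0"
  :dependencies [[org.clojure/clojure "1.8.0"]]
  ;; Only AOT-compile when building the uberjar, and compile everything:
  :profiles {:uberjar {:aot :all}})

;; Alternatively, compile only specific namespaces; their transitive
;; dependencies get compiled too:
;;   :aot [bot.publisher]
```

Keeping :aot inside the :uberjar profile matches the "we only do it when we uberjar" approach: day-to-day REPL development stays fully dynamic.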
What I do is just put an object, then get the object when I need it. OK, any questions about this?

Any tips on how to come up with a good set of exclusions?

Sorry?

When you're dealing with Amazonica or any other big monolithic library, do you have any tips on how to come up with good exclusions?

OK. Actually, there are two kinds of exclusions. Normally when we're doing Clojure projects, we can set global exclusions that we carry into every project. We exclude Clojure itself, because in our project we specify Clojure 1.9 or 1.8, but many libraries haven't updated to the latest version and still depend on older ones, which gives you trouble. We also exclude some logging libraries — we've found they cause a lot of trouble — and the commons libraries. So when you're doing a project, those are things to consider: global exclusions for Clojure itself, logging, and commons.

I'm asking about lein deps :tree. It can list all the potential conflicts, and then you can resolve them.

Yeah, that's one way. Normally I only use that when something doesn't work and I hit a problem. For example, some library used a very old Jackson version, which broke my code completely, so I had to deal with it. When you hit that kind of problem, you can run lein deps :tree, or use boot show -d, to find all the conflicts and then solve them one by one. But normally you don't need to. Any other questions?

OK, this one is about credentials. Previously, what I did was export my credentials on the machine instance as environment variables. AWS provides the same thing: you can set them in the console, saying "I need these environment variables," and when it brings up the Lambda function, it injects all of those environment variables for you.
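The dependency trimming and the S3 put/get described above could be sketched like this. The bucket and key names are placeholders, the SDK versions are illustrative, and the exact artifact split should be checked against the Amazonica README for your version:

```clojure
;; project.clj: pull in Amazonica but exclude the monolithic SDK,
;; then add only the modules you need (versions are placeholders):
;;   [amazonica "0.3.x" :exclusions [com.amazonaws/aws-java-sdk]]
;;   [com.amazonaws/aws-java-sdk-core "1.11.x"]
;;   [com.amazonaws/aws-java-sdk-s3   "1.11.x"]

(require '[amazonica.aws.s3 :as s3])

(defn read-last-updated [bucket key]
  ;; get-object returns a map whose :input-stream holds the object body
  (slurp (:input-stream (s3/get-object :bucket-name bucket :key key))))

(defn save-last-updated [bucket key timestamp]
  (let [bytes (.getBytes (str timestamp) "UTF-8")]
    (s3/put-object :bucket-name bucket
                   :key key
                   :input-stream (java.io.ByteArrayInputStream. bytes)
                   :metadata {:content-length (count bytes)})))
```

With a one-line timestamp this is the whole persistence layer: read the last-updated time at the start of a run, write the new one at the end.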
But when you're using Clojure, we all know there's a great library for this: environ. For example, this one is the Twitter OAuth token. With environ I use the variable name in lower case; on the backend, we normally use the shell convention, upper case, and environ automatically converts between the two. Any questions on this?

What do you do when the token expires?

What do I do when the token expires? Actually, its lifetime is quite long; I haven't really hit that problem yet, because this is an API token. I've been running it for one and a half years and it has never expired.

OK, that's all about the Clojure side. Any questions on this? No? Then this part is about the Lambda side, the AWS side, which I want to mention a bit. First, the commands you can use to create your function, delete your function, and update your function code. You can see you can set the function name and its role. The role is a rather important one, because it determines your function's ability to interact with your AWS infrastructure. When I set this Lambda execution role, the function can access my S3 and my CloudWatch. If you don't set it properly, you'll have access problems; for example, you won't be able to update your object in S3. So that's why you need to pay attention to it when you're doing this. Any questions?

This timeout, for example, sets a time limit for every function run: when a run takes more than 10 seconds, it counts as an error — your function didn't work — and you get error logs and error statistics. You can see the handler is the Java class and method that I generated with gen-class earlier.

This part is about CloudWatch. That's what I used to replace the cron job. Lambda functions can't run on a schedule by themselves, so to simulate a cron job I need to use Amazon CloudWatch.
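The environ case conversion mentioned above works like this — the variable name here is hypothetical:

```clojure
;; environ maps a shell-style variable such as TWITTER_OAUTH_TOKEN
;; (set on the instance, or in the Lambda console's environment
;; variables) to a lower-case, dashed keyword. The name is a placeholder.
(require '[environ.core :refer [env]])

(def twitter-oauth-token (env :twitter-oauth-token))
```

The same lookup works unchanged whether the variable comes from the shell, a Lambda environment variable, or a Leiningen profile, which is what makes the move from a DigitalOcean instance to Lambda painless.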
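Since Amazonica wraps most of the SDK, creating the function can also be scripted from the REPL instead of the AWS CLI. This is a sketch under the assumption that your Amazonica version exposes the Lambda client as amazonica.aws.lambda; every name, ARN, and path below is a placeholder:

```clojure
;; Hypothetical function creation through Amazonica. All names, ARNs,
;; and bucket/key values are placeholders.
(require '[amazonica.aws.lambda :as lambda])

(lambda/create-function
  :function-name "tweet-publisher"
  :runtime "java8"
  :handler "com.example.TweetPublisher::publish" ; gen-class'd class + method
  :role "arn:aws:iam::123456789012:role/lambda_exec_role" ; needs S3 + CloudWatch access
  :timeout 10        ; seconds; longer runs are counted as errors
  :memory-size 512
  :code {:s3-bucket "my-deploy-bucket"
         :s3-key "so-tweet-bot-standalone.jar"})
```

Updating the code later would use the corresponding update-function-code operation with the same function name.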
You can put in a rule to run something every two hours, or every hour. Then you add a permission, saying this rule is allowed to call these functions. Then you set the target. After that, it runs like a cron job, invoking your function so it can do its work. You don't need to put it on a machine instance; it calls the function directly.

So, the benefits. Now I pay zero. I don't need to pay anymore, because Amazon gives me a free quota much higher than what I use. It also gives me a very nice console to see all my settings. For example, after I finish my setup, I can see that my Lambda function can be triggered by the CloudWatch event, can generate logs, and can access my S3, so I can see all my functions and their structure. It also gives me a very nice dashboard: after I deploy my function, it shows me, for example, when it was invoked, how long each run took, and how many errors were encountered. Normally it retries three times; if those fail, the task fails. You can see all of this in charts, and you can also go to the logs, which list all the details.

OK, that's all for my talk. Any questions? Actually, any thoughts you want to share? You can also try putting these kinds of jobs onto Amazon AWS.

Are you on some kind of free trial right now?

No, my free trial period has ended. But AWS Lambda now — I'm not sure whether it's a promotion — gives you, I think, quite a generous quota.

Can you do that for celebrities?

Hm? For celebrities, they will do that. Exactly. So in a normal use case, I don't think you'll use it up. OK. OK, that's all. Thank you.
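The rule → permission → target flow described above can also be scripted. This sketch assumes your Amazonica version exposes the CloudWatch Events and Lambda clients under the namespaces shown; the rule name, function name, and ARNs are placeholders:

```clojure
;; Hypothetical scripted version of the console steps above. Namespaces,
;; names, and ARNs are assumptions, not the talk's actual setup.
(require '[amazonica.aws.cloudwatchevents :as events]
         '[amazonica.aws.lambda :as lambda])

;; 1. A rule that fires every two hours
(events/put-rule :name "every-two-hours"
                 :schedule-expression "rate(2 hours)")

;; 2. Allow CloudWatch Events to invoke the function
(lambda/add-permission :function-name "tweet-publisher"
                       :statement-id "allow-cloudwatch"
                       :action "lambda:InvokeFunction"
                       :principal "events.amazonaws.com")

;; 3. Point the rule at the function
(events/put-targets
  :rule "every-two-hours"
  :targets [{:id "1"
             :arn "arn:aws:lambda:eu-west-1:123456789012:function:tweet-publisher"}])
```

Cron-style expressions (for example "cron(0 */2 * * ? *)") are also accepted in place of the rate expression.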