Welcome to the Qinling project update. My name is Lingxian Kong, I work for Catalyst Cloud in New Zealand, and I'm the current PTL of Qinling. Before I share the project update with you, may I ask a couple of questions? If you have ever heard about Qinling before, please raise your hand. Cool, nice. And if you have ever deployed Qinling, either in a DevStack environment or some other testing environment, please raise your hand. Okay, as expected.

So, Qinling aims at providing function as a service in OpenStack — you can also call it a serverless platform. It was initially created by Catalyst Cloud in 2017 and was accepted as an official OpenStack project at the end of that year. We had our first official release in Rocky, and after this development cycle we will have the second one. As of now, two companies — Catalyst Cloud in New Zealand and AWCloud in China — have done most of the contributions to the project. As a small project with a small group of people, we completed most of the basic functionality in the last release.

So, here is what we have done in this development cycle. Basically, we focused on several things: feature enhancements, documentation, security, and the UI — the dashboard.

The first is Python 3. By Python 3 I mean not only the code changes for the project itself, but also the Python 3 runtime. In Qinling, a runtime is the environment in which the user's function actually runs. We all know that Python 2 will reach end of life in 2020, so more and more code will be written in Python 3. In Qinling we added a Python 3 runtime as a reference implementation, and we also use the Python 3 runtime as the default in the Tempest tests in the CI gate.

Next: function timeout.
As a user or developer, you really don't want your functions running indefinitely — that would cost you a lot of money. So in Qinling we added a field, a parameter, for function creation: the end user can specify a timeout value when creating a function. When the timeout is reached, Qinling terminates the function and the user receives a timeout error. You can just specify the timeout when you create the function, like this — yes, in seconds.

We have also spent a lot of time and energy improving our documentation. We now provide a more detailed installation guide for both developers and cloud operators. A developer can install Qinling in an all-in-one environment using DevStack, and for cloud operators we provide a manual installation guide on Ubuntu 16.04. We also provide a configuration guide for cloud operators, so that Qinling can be configured to talk to an existing Kubernetes cluster rather than setting up a new Kubernetes cluster from scratch. And we provide some cookbooks for end users, so they can get started with Qinling by following those guides step by step.

On the security side, we added TLS support for Qinling's communication with the Kubernetes cluster and the etcd service. In most deployments, especially in production, the Kubernetes cluster is usually exposed through an HTTPS endpoint, so for Qinling to talk to an existing Kubernetes cluster we added some config options and the corresponding code changes.

Next: the untrusted image-type function. What does that mean? In Qinling we support three types of functions. The end user can create a function by uploading the code package directly to Qinling, or can upload the code package to Swift first and create the function by specifying the container name and object name.
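Returning to the timeout feature for a moment: the actual enforcement happens inside Qinling's engine, but the idea can be sketched in a few lines. This is only an illustration — the function name `run_with_timeout` and the result dictionary are assumptions, not Qinling's real internals.

```python
import subprocess

def run_with_timeout(code_path, timeout_s):
    """Run a function's code in a child process and kill it at the deadline.

    A rough stand-in for what the service does: when the user-specified
    timeout (in seconds) is exceeded, the execution is terminated and
    reported back as a timeout error.
    """
    try:
        proc = subprocess.run(
            ["python3", code_path],
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        return {"status": "success", "output": proc.stdout}
    except subprocess.TimeoutExpired:
        # Mirrors what the Qinling user sees on a timed-out execution.
        return {"status": "failed", "output": "Function execution timed out"}
```

The point is simply that the deadline is enforced by the platform, not by the user's code.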
Additionally, we also support the image-type function: the user can specify their own Docker image for creating a function, which means the user can run functions written in any programming language. That is very useful and powerful, but it brings some security concerns for cloud providers, especially in a public cloud, because allowing users to supply their own images makes cloud operators uncomfortable. We all know that containers alone are not a strong security boundary: a malicious user could potentially provide a specially crafted image, try to escape the container, and affect other containers or applications running on the same host OS kernel.

However, thanks to Kata Containers, which appeared under the OpenStack Foundation, and other technologies like Google's gVisor, we now have runtimes that provide a higher isolation level than ordinary containers. So in Qinling we can support image-type functions by leveraging those technologies: even if there are vulnerabilities in the image itself, with Kata Containers or similar technologies those vulnerabilities can hardly escape the function sandbox and affect other containers or applications on the same host.

The last thing we have done is dashboard support, so the user can just click buttons rather than typing parameters on the command line. As I mentioned, we are a small project, and Qinling itself just works. So that's basically what we have done in this development cycle.

Now I'd like to show you two demos, depending on the time. The first demo is Qinling's integration with Kata Containers — as I mentioned just now, we allow the user to specify their own Docker image to run functions. (Audience: Where is the Docker image hosted?) On Docker Hub. (Can it be anywhere else?) Yes, any other place that Qinling can access.
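The "untrusted image" decision described above boils down to a simple dispatch: functions built from Qinling's own trusted runtime images can use plain runc, while user-supplied images get a stronger sandbox. This is a hedged sketch of that logic, not Qinling's actual code — the function name, the `code_type` key, and the runtime-class strings are all illustrative.

```python
def pick_runtime_class(function):
    """Decide which container runtime a function's pod should use.

    Package- and Swift-type functions run inside Qinling's own trusted
    runtime image, so ordinary runc is fine.  Image-type functions bring
    an arbitrary user image, so they are treated as untrusted and run
    under a stronger sandbox such as Kata Containers or gVisor.
    """
    if function.get("code_type") == "image":
        # Which sandbox is used depends on what the cluster offers:
        # "kata" if Kata Containers is installed, "gvisor" for gVisor.
        return "kata"
    return "runc"
```

In a Kubernetes deployment, a choice like this would typically end up in the pod spec so the scheduler starts the pod under the right runtime.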
Here I have two virtual machines. On the first one I have installed all the Qinling controller services — the Qinling API and the Qinling engine. Is this clear enough? Okay. The second one is an all-in-one Kubernetes cluster, but with two different runtimes: runc and Kata Containers.

Before I create an image-type function, I want to show you how to create a normal function. As the demo user, first I will check the runtimes — these commands are just aliases for the openstack CLI — because I have already created a Python 3 runtime. And there are no functions yet. So first I will create a normal function. I have a Python code file here; it's very simple, it just prints "hello, Berlin". I will create a function using that file. For function creation, the first parameter I need to specify is the runtime — the runtime ID — and also the code file itself. I also need to tell Qinling the entry. The entry means the module plus the function name — if you code in Python, you know the module name is the file name itself. That's all: just the runtime, the file, and the entry. So now the function is created.

Next I will run the function — in Qinling we call that an execution — and the only required parameter is the function ID or function name. But before I run the function, let's first check whether there are any Kata containers running now, and also check runc. Okay, there are a lot of runc containers; we only care about the number. There are 27. Now I will create the function execution, without any parameters, because the Python code just uses the default parameter. Okay, the execution was successful, and we can check the log: it printed the start of the execution, then the function output, then the end of the execution. So now let's check whether it ran in a Kata container or a runc container.
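For reference, the demo's code file probably looks something like the sketch below (the exact filename, default value, and return value are assumptions on my part, reconstructed from the demo output):

```python
# hello_berlin.py — a minimal Qinling-style function.
# The "entry" is "<module>.<function>", so for this file it would be
# "hello_berlin.main".
def main(name="Berlin", **kwargs):
    # Qinling passes the execution input as keyword arguments, so an
    # execution created with input {"name": "openstack"} would print
    # "hello, openstack" instead of the default.
    message = "hello, %s" % name
    print(message)
    return message
```

This also explains the later part of the demo where supplying the input `openstack` changes the logged output.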
We check the Kata containers again, and runc — I think there will still be 27, because we already created the runtime, and under the hood the runtime has some containers already running. If we take a look at the Kubernetes pods, we can see there are indeed already some pods.

Now what I'm going to do is create a function from an image, on the command line. This time we only specify the image name, and we also specify a timeout, because we don't want the function to run indefinitely. We create the function, and then, just like with the last function, we create an execution with the function ID. What Qinling is going to do is pull the image and execute the code inside the image. The execution was successful; let's check the log. It just prints "qinling", because that's what the function inside the sample image does. Now let's see whether any Kata containers were created — the command is just stuck, okay. Anyway, first we check whether there are more runc containers: still 27. We'll leave that command running and check later whether the Kata containers were created.

The basic logic is: if the user creates a function from a user-specified image, Qinling treats that image as untrusted. So if you use Kata Containers, the image will run in a Kata container; if you use gVisor, it will run in gVisor.

I also want to show you the Qinling dashboard. Here I have another environment — another Qinling installation. I also created a Python 3 runtime there, and there are no functions yet. First we just click "create function" and give it a function name. I'm going to use the same code file as the one I created just now, so the name is going to be "hello Berlin".
We use the default values for the description, CPU, and memory, and the code type is "package" (the other code types are Swift object and image). So we are going to upload a code file, a zip file; I will show you how the zip file is created.

Oh — the other command just returned its result: you can see there are two more Kata containers created. Back to the function.

I'm going to use the same code file for the dashboard demo. The content is the same — "hello, Berlin". We will create a zip; if you are familiar with AWS Lambda, it follows the same approach. The zip file is going to be named after "hello Berlin", and we want to package all the Python files in this folder. Now we have the zip file. We go back to the dashboard and choose the file — okay, here — and we also need to specify the entry and the runtime; we only have one runtime available. Now we create the function.

If we want to run the function, we create an execution by specifying the function identifier — because we only have one function, it's already selected — and we input nothing; I will show you how to specify input later. We create the execution, its status is success, and we check the execution log: you can see the same log as before.

What if we want to specify the input of the function? If you looked at the code just now, you saw that it receives a parameter called "name". This time we will create another execution, specifying the input — for example, "openstack" — and run the function again. There is another execution created, and if we check the log, it prints "hello openstack". Okay, so that's the demo I wanted to show you today.

Looking to the future, there are still a lot more things we need to do. The first is function metrics, because as a public cloud provider we need to charge customers according to the function's resource usage — for example, CPU or memory.
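As a footnote to the packaging step shown above, before continuing with the roadmap: creating the Lambda-style zip is just archiving the Python files in the folder. A minimal sketch (the function name and the default archive name are illustrative, not a Qinling tool):

```python
import glob
import zipfile

def package_function(zip_name="hello_berlin.zip", pattern="*.py"):
    """Zip all Python files in the current folder, Lambda-style, so the
    archive can be uploaded via the dashboard's "package" code type."""
    files = sorted(glob.glob(pattern))
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in files:
            # Store each file at the archive root, so the entry
            # "<module>.<function>" resolves after extraction.
            zf.write(path)
    return files
```

On the command line the equivalent is a plain `zip` invocation over the folder's `.py` files, which is what the demo does.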
Function metrics can also help function developers debug their own code. Next, retries: how to handle exceptions properly, either inside Qinling or outside of it. And advanced auto-scaling: currently Qinling only supports a basic auto-scaling policy based on the number of executions, but in the future we really want to support more advanced auto-scaling signals such as CPU load, memory usage, et cetera.

And containerization: at Catalyst Cloud we are in the middle of a transition from a Debian-package-based control plane to containerization, so there's no reason Qinling shouldn't support that.

And local function testing. As a function developer, you don't really want to test your function by creating actual executions in the public cloud, because that's expensive — it will cost you a lot of money. So we are figuring out a proper way to test functions in a local environment rather than by creating actual executions in the cloud.

We are also looking at integration with other ecosystems, such as CloudEvents in the CNCF, and with other serverless platforms. The three big players — AWS Lambda, Google Cloud Functions, and Azure Functions — already support serverless eventing, and so does IBM OpenWhisk. So the next step for us is to support that kind of integration with other ecosystems.

If you are interested in Qinling, feel free to contact us, either on the IRC channel or by sending emails to the mailing list. Tomorrow afternoon, from 1:40 to 2:20, there will be a project onboarding session where I will talk in much more detail about the implementation of Qinling — you are all welcome to join. That's all I wanted to share with you today. Any questions? Yes, please.

(Audience asks a question.) Sorry, I didn't quite follow — a secure way to pass a secret as input to the function, or to other services?
You mean, is there a secure way to provide input — for example, if you want to pass a password or some other secret? For now, no. But in the future, if there is credential information you want to pass as a function parameter, maybe we can integrate with Barbican or something similar: you store your information in Barbican, and Qinling retrieves it from Barbican and passes it as the function input. Yes, please.

(Audience: Is it possible to swap out Kubernetes?) Swap it out? Well, the first version of Qinling was actually implemented without any container orchestration platform. We have also thought about integrating with Zun, for example. Zun only supports containers, but even if you integrate Zun, you still need a layer to control and manage the containers created for running your functions — to pool the containers, to create new ones, and to garbage-collect old ones. So you still need a layer like Kubernetes or Swarm to do that. Yes, I agree with you. I have also discussed this with the Zun PTL — he wants to integrate Zun with Qinling, but we would still need to develop a new layer to manage all the containers. Yes, please. Okay, the last question, please — we are running out of time. Yes, sure, the last question.

(Audience: If I run the same function several times, do I hit the same container — the same pod — each time, or does a new one get started every time?) So the question was: if you run the same function several times, will Qinling create new containers to run your function? For the same function, Qinling won't create extra containers to run it — the auto-scaling policy exists for exactly that purpose.
For the same function, if you run it, say, three times, there is only one container, and that one container runs your function several times. (Audience: And is it automatically collected after some time?) Yes, yes — it gets garbage-collected after some time. Okay. Thank you, thank you.
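As an editorial footnote on the "local function testing" roadmap item mentioned earlier: the core idea — invoking a function locally instead of creating a real execution in the cloud — can be sketched by resolving a Qinling-style `"module.function"` entry string and calling it in-process. The helper name below is illustrative, not part of Qinling.

```python
import importlib

def run_locally(entry, **inputs):
    """Resolve a Qinling-style entry string ("module.function") and call
    it in the current process — a cheap stand-in for creating a real
    execution in the cloud."""
    module_name, func_name = entry.rsplit(".", 1)
    module = importlib.import_module(module_name)
    func = getattr(module, func_name)
    # Pass the would-be execution input as keyword arguments, matching
    # how the Python runtime hands input to the function.
    return func(**inputs)
```

With a file `hello_berlin.py` on the path, `run_locally("hello_berlin.main", name="openstack")` would exercise the same code path as a cloud execution, without any cost.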