Let me introduce myself. I am from the IBM China Development Lab, and my working experience has mainly been in open source communities. I started out on OpenStack, and in 2016 I moved to R&D on a serverless platform in the open Apache community. Starting from this year, I have turned my attention to another project, focusing on building serverless on Kubernetes.

These are the topics I will cover today. First, an introduction to serverless and Apache OpenWhisk; next, I will share with you IoT applications, their representative architectures and features, and which parts of those architectures serverless can deliver; the fourth part is connecting IoT events to the Apache serverless platform.

So first of all, what is serverless? There are two different explanations. One is Functions as a Service, or FaaS; the other is Backend as a Service, or BaaS. FaaS refers to small segments of code that are executed on demand and scaled automatically, with no need to administer any of the related infrastructure, driven by events. Most vendors now mean FaaS when they say serverless. But the term also covers the wider concept of Backend as a Service, meaning third-party API services that provide the fundamental function modules of development, need no administration, and scale up automatically. So whether it is Functions as a Service or Backend as a Service, serverless means administration-free and automatically scaled. For FaaS specifically, execution is gateway-driven and event-driven, and scaling and billing are based on actual invocations. In the early days of cloud computing, we always said that computing would be consumed like water or electricity, but in practice that had not really been achieved.
Suppose a service is called only three times within a week. Traditionally, that service still has to run continuously; with serverless, the code is only loaded onto the computing platform for those three invocations. This achieves consumption of computing resources genuinely like consuming water and electricity. We can also achieve fast development and fast iteration, because what you face is a holistic cloud platform: when you need it, it executes. This is friendly for developers and brings them a new development experience. A developer can come from any background — even a ten-year-old primary school student or an artist — because what they need to do is very simple: finish a piece of code and run it on the platform, which then exposes an HTTP address for others to use. And the charge is very low.

So what is Apache OpenWhisk? It is an event-driven platform that executes your code, currently an incubating project in the Apache community. It has also been verified on IBM's public cloud platform, which uses the same core code as OpenWhisk. OpenWhisk has its own programming philosophy: the event works as the trigger, so you put your main business logic into handling events. A segment of code, be it Python or Node.js, is a function — in OpenWhisk terms, an action. You define an event source, and after the event happens, the action is triggered. The main concepts are the trigger; the rule, which connects a trigger to an action; the action, which is your code; and the package, which can bundle different related things together.

As for the main application scenarios: every action naturally belongs to a kind of microservice, because it runs at the server end, has an address for you to call, and on each call executes for a minute or two as a service.
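To make the action concept concrete, here is what a minimal OpenWhisk Python action looks like: a file exposing a `main` function that receives the invocation parameters as a dict and returns a dict. The `wsk` CLI lines in the comments show roughly how it would be wired to a trigger; names like `hello` and `myTrigger` are placeholders.

```python
# A minimal OpenWhisk Python action: the platform calls `main` with the
# invocation parameters as a dict, and the returned dict is the result.
def main(params):
    name = params.get("name", "world")
    return {"greeting": "Hello, " + name + "!"}

# Deployed and wired to an event roughly like this with the wsk CLI:
#   wsk action create hello hello.py
#   wsk trigger create myTrigger
#   wsk rule create myRule myTrigger hello
```

After deployment, each firing of `myTrigger` invokes the action once; between firings the action consumes no compute.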
So we can regard it as a natural microservice platform, and also as a platform for IoT — more details on that shortly, for a deeper dive. It is also used in DevOps: a trigger fires after a code change, an image is generated, and that in turn triggers the next step in the pipeline. These are the representative scenarios.

Next, let's have a look at IoT, the Internet of Things. This is now a clear trend: all kinds of physical devices can be connected through the network and programmed. You can imagine environment sensors deployed in every corner of the world measuring the air and the environment, uploading their readings to the cloud for aggregation; a wearable band measuring your biometric information; or industrial computers in factories integrating metrics from different machines for error detection or failure prediction. We also have V2X, which is part of IoT, through which vehicles can be connected. In addition, if you use Xiaomi devices, maybe you have Xiaomi smart home devices: air conditioners, fridges, and other home appliances connected to the cloud and controllable online. Today we have basic control, like temperature control, with a modest level of smartness; in the future, as smart devices become more widespread, we will reach a real IoT era.

So what are the characteristics of IoT applications? First of all, big data and heterogeneity: we have millions of IoT devices generating massive amounts of data, normally connected through a gateway. The gateway performs preliminary processing — perhaps cleaning, or format conversion. The data processed by the gateway is then migrated to the cloud for storage and analysis. These steps make up the typical IoT data flow.
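As a rough illustration of that gateway-side preliminary processing, a sketch like the following drops malformed samples and converts readings to one common format before upload. The field names (`temp_c`, `device_id`) are invented for the example, not from any real device schema.

```python
def preprocess(readings):
    """Gateway-side cleanup before upload to the cloud: drop incomplete
    samples and normalize the rest to a single schema.
    Field names here are illustrative assumptions."""
    cleaned = []
    for r in readings:
        # Cleaning step: skip samples missing the measurement entirely.
        if r.get("temp_c") is None:
            continue
        # Format-conversion step: map every device's fields to one schema.
        cleaned.append({
            "device_id": r.get("id", "unknown"),
            "temperature_c": round(float(r["temp_c"]), 1),
        })
    return cleaned
```

The gateway would run something like this continuously, then forward only `cleaned` batches upstream.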
So heterogeneous big data is collected through a gateway, uploaded to the cloud, and then stored and analyzed. But the flow is unpredictable: we cannot forecast when data will arrive, when anomalies will occur, or when data will be transmitted. Should we keep an IoT processing service running 24/7, waiting here for data to execute, or can we use serverless capability, so that the arrival of data itself triggers execution? By doing so, computing costs can be reduced. This is serverless and its typical application in the IoT scenario.

Here is a very typical IoT application architecture, on the basis of IBM Cloud. On one side, devices upload data through MQTT to the IoT platform, and below that sits Message Hub. From Message Hub, the data goes in two directions: one direction stores it in object storage; the other feeds streaming analytics for real-time processing. We also have visualization and machine learning capabilities, and we can mine the historical records — that belongs to data science. This is only a very simple example case to share with you.

So, within this example, what are the potential areas where serverless can play to its advantages? We think serverless can play a very important role in four areas. In part A, data goes through Message Hub and is transmitted to object storage. Before that, we can insert a segment of code, triggered by the event of a message being transmitted. The code receives the original data from Message Hub, and then it can be archived, formatted, or otherwise processed, making it easier to store and analyze. Part B is another scenario: after the data is stored, OpenWhisk can be used for ETL-style processing — to process the data, or to search for other information to augment and enrich it.
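A part-A action of that kind might look like the sketch below. It assumes it is invoked with a batch of Message Hub messages whose parameter shape (`messages`, each with a JSON `value`) is an assumption for illustration; it reshapes them into archive records that a real action would then write to object storage.

```python
import json

def main(params):
    """Part-A sketch: triggered per batch of Message Hub messages,
    it reformats raw payloads into archive records.
    The `messages`/`value` parameter shape is an assumption."""
    records = []
    for msg in params.get("messages", []):
        payload = json.loads(msg["value"])
        records.append({
            "device": payload.get("device", "unknown"),
            "metric": payload.get("metric"),
            "value": payload.get("value"),
        })
    # A real action would now write `records` to object storage.
    return {"archived": len(records), "records": records}
```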
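And the part-B enrichment step can be sketched as a small join of a stored record against third-party reference data — again with made-up field names:

```python
def enrich(record, lookup):
    """Part-B ETL sketch: the incoming record carries only a key
    (`device_key`, an invented name); join it against third-party
    reference data to enrich the record before analysis or ML."""
    extra = lookup.get(record.get("device_key"), {})
    enriched = dict(record)       # keep the original fields
    enriched.update(extra)        # add the looked-up attributes
    return enriched
```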
For example, a record may reference third-party data through only a key value. According to that key, we can look up the third-party data and enrich the record; afterwards, the enriched data can be used to drive machine learning and AI. So between storage and the machine learning phase, we can insert a segment of serverless code for this processing, triggered exactly when it is needed.

The third area is real-time processing of streaming data. We can also use OpenWhisk to execute serverless actions — serverless functions — that perform specific tasks and trigger feedback. For example, we can intercept specific events in the stream for processing: on the road, with V2X-connected vehicles, we can discover congestion, and when the congestion reaches a certain threshold value, that can trigger rescheduling of the traffic lights to relieve it. So in stream processing, serverless can be used to handle specific events.

The last scenario is serverless capability on the edge. Maybe you have heard a lot about edge computing; we can leverage serverless capability there too, so that the edge and the cloud enjoy the same capability. This is still in R&D, but it is a trend for the future.

Here is a real example case, a real client scenario from IBM. Inside their company, they have developed specific devices for capturing audio data, and they want to detect abnormal situations and trigger alarms. This could be used for babysitting: if your baby cries on the second floor while you are on the first floor, you will receive a notification on your smartphone saying that your baby is crying. Or it could be used in your home, especially a smart home.
If some noise occurs while you are sitting in a very quiet environment, it will send you an alert. On the right of the diagram, the data is captured by the device made by this manufacturer, shown in the upper right. The audio data first goes through two different flows: one enters the IBM Watson IoT Platform, and the other goes to the object storage platform. There are three parts in the diagram — one orange, one blue, one brown — and the brown one is the function that runs on the serverless platform. It analyzes the sound to see whether it is normal; if there is any anomaly, it produces a warning, and it also feeds the machine learning scenario. The machine learning part means that if a certain situation is analyzed often — say, the sound of rain keeps being detected — the system can learn to remove the rain sound. The brown part then processes the sound, augmenting it or removing the rain noise that machine learning has gradually identified; after the rain sound is removed, if any abnormal sound is still detected, it triggers the warning. So this case actually adopts scenarios 2 and 3: it monitors for anomalies and gives warnings, and as in scenario 2, after the data is stored it is also augmented.

Having talked about data, let's now talk about events, because on our IoT platform events are just as important as data. Apache OpenWhisk has event processing capability built in, and you can see it in the packages on OpenWhisk: how many packages there are reflects its capability to deal with events. It can monitor Kafka, trigger through push notifications, integrate with Jira, and trigger events in a DevOps flow.
It also has an RSS subscription package: when the subscription is updated, it can trigger an event. It can likewise be integrated with CouchDB, so that when the database is updated, an event is triggered. For IBM Cloud, with some further development and integration on the platform, more event sources can be processed, including the database and the event stream — which refers to Kafka messages. A custom trigger means, for example, that I send a message to a specific REST endpoint and that triggers the event; there are also mobile push triggers, GitHub triggers when anything in a repository is updated, and periodic, timer-based triggers. These are the event processing capabilities on the IBM Cloud platform today.

Now, if you want to connect your own events to a serverless platform, how do you do it? Most serverless platforms support Kafka, so at the very least you can send a message onto Kafka, and any message on Kafka will trigger a function. For the OpenWhisk platform, we also have a customized event access model.
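Under the hood, triggering a function — whether from Kafka or from your own monitor — comes down to calling the platform. For OpenWhisk, an action can be invoked over its REST API; the sketch below only constructs the request pieces, and the host, namespace, and credentials are placeholders you would take from your own deployment.

```python
import base64
import json

def build_invoke_request(api_host, namespace, action, auth, params):
    """Build the URL, headers, and body for a blocking OpenWhisk action
    invocation over the REST API. `api_host`, `namespace`, and `auth`
    (a "user:password" key) come from your own deployment."""
    url = ("https://" + api_host + "/api/v1/namespaces/"
           + namespace + "/actions/" + action + "?blocking=true")
    token = base64.b64encode(auth.encode("utf-8")).decode("ascii")
    headers = {"Authorization": "Basic " + token,
               "Content-Type": "application/json"}
    return url, headers, json.dumps(params)
```

An event monitor would then POST `body` to `url` with these `headers` (for example via `urllib.request`) each time it detects an event.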
For example, you build an event monitor. The monitor is a long-running process; whenever it detects an event, it triggers a function that you have programmed on the serverless platform — an OpenWhisk action — which has been deployed to the OpenWhisk platform. The action can then be invoked in two ways: through the API, or through a URL. Most serverless platforms are capable of this: after you deploy a function, the platform gives you back a domain name, and through that domain name you can invoke the action. So the first part of your job is to develop the event monitor, and the second is to develop the event processing logic. The monitor runs for a long time, detects events, and invokes the function; with serverless capability, when the function is not being invoked it sleeps and does not occupy any computing resources. This is a very simple processing model, though of course there are more complicated ones. And that is the end of my presentation. Any questions?

Q: My question is about on-premise deployment of this platform. Are there any special requirements for private cloud?
A: For private cloud deployment there is no problem, because OpenWhisk can be deployed on Kubernetes — in fact, on any platform that supports Docker images. You deploy Kubernetes, then OpenWhisk on top, and it is done.
Q: But with a larger scale of computation?
A: What would be bigger?
Q: I mean more demand for computing — more requests coming in. Will it also scale up horizontally?
A: Yes. For a serverless platform, generally, serverless means the developer does not pay attention to the server end, but as the operator you still need to pay attention to it; if the hardware is not sufficient, you need to add more hardware.
Q: My question is about private cloud. We already have containers deployed, which can scale up and down as well. What would be more special about a serverless platform in this case?
A: Well, that scaling is scaling from 1 to n. For serverless, it can scale from 0 to n — and actually, Kubernetes is now able to achieve this too. It means my service does not need to run 24/7 just waiting for others to call it, so I would still recommend serverless. But the first trigger of a serverless function takes some time, because it starts from code: if it involves a container, the first trigger may need at least one second, and you need to judge whether you can accept that one second. First of all, you save resources — when there is no trigger, nothing executes. The first invocation may take somewhat longer, but if 50 requests arrive within 5 seconds, it can scale horizontally very quickly; that is achievable. Please hand the microphone over to him. Actually, most serverless platforms today do not support long-running workloads. But the serverless concept keeps evolving: we used to talk about stateless serverless, and now we are talking about serverless with state. We often hear clients saying they want longer-running workloads, but if they require long running, they can just do a normal deployment on Kubernetes; they do not need a serverless platform.
Q: My question is: is serverless similar to Google's Firebase? Is that also a Functions-as-a-Service platform?
A: Yes.
Q: It seems you don't know it?
A: Right, I don't know much about it.
Q: OK. Then for serverless, as you mentioned, it can also run JavaScript?
A: Yes.
Q: So does that mean you can write in any language?
A: Yes, you can write in any language, upload it onto the platform, and when you need it, the platform runs that language and that function for you.
Q: My question is whether it can be compared with AWS Lambda — it is like an open source Lambda. Does it also make use of Docker?
A: Yes. For Lambda, I don't think it is Docker. When a new request comes in, a container has to be started — a cold start. But we have some optimization: we can prepare a container pool in advance. If you need a Docker image running a Node.js application, we can start the Node.js environment for you in advance, so the environment is ready except for injecting your program. It occupies some resources, but you just keep it there, ready, to reduce the cold start. Strictly speaking, it does not eliminate the cold start; it optimizes it.
Q: Some people were asking whether you support your own images. If you upload your own image, can you still get that pre-loading?
A: There are actually two modes for images. We call the custom container a black box: we just start it, but we don't know what is inside the container.
Q: Kubernetes is now also building serverless platforms. How do you compare them?
A: Well, IBM invests in both communities, and I think they will both exist in the future.
Q: My question is not very relevant to this, but I want to ask: for different language runtimes, the start time can vary a lot. How would you optimize this?
A: This is a very good question; I would like to share some of my viewpoints. I know that cold start time is mainly spent on starting the container. As for the runtimes, for Node.js and Java, Java is slower than Node.js, but I think that time is
still very small compared with the container start time, so it can be neglected.
Q: I remember you started this project back in 2016; three years have passed. How do you see the future direction of serverless platforms — any trends?
A: I think at the beginning we talked about saving resources with a serverless platform, because it adopts a pay-as-you-need, pay-as-you-use mode. But when we talk about the future of serverless, I think it is really about offering a new experience to developers, so that developers can enjoy more of the cloud's development capability. Serverless, I think, was initiated around 2012, and by 2016 its evolution was not finished; it will keep going. Personally speaking, the future cloud platform will become a cloud OS: programming on the cloud will invoke a lot of APIs, and the cloud's capabilities will support you in realizing your logic. Today, if you write a Node.js script locally, an image processing capability, say, comes from your local machine; on the cloud, it will be based on the cloud's capability. So I believe that as serverless platforms develop, they will become more of a cloud OS. Another important trend for serverless is eventing — events, and cloud events: in addition to the OS-like capability, there will be various events to trigger execution. I believe that is an important direction for serverless development.
Q: You mentioned eventing as an important direction for the serverless future. The CNCF has CloudEvents — do you think that is related to the eventing here for serverless?
A: I know about CloudEvents, which is for cloud native; for OpenWhisk, we do not have a plan for it at the moment.
Q: You also mentioned cloud native and IBM's participation in it.
A: IBM has put a lot into the cloud native community; we are involved across its projects. OK, time is up. Any more questions? Now we close the session. Thank you for listening.