Hello and welcome to KubeCon. Next, we are going to share how to use cloud-native serverless technology in the autonomous driving industry. I'm Benjamin Huo, and together with me is Xiuming Lu from UISEE. I'm a senior architect at KubeSphere and the creator of OpenFunction. Xiuming is the architect of UISEE's autonomous driving cloud platform. Here is the agenda. OpenFunction is an open source cloud-native FaaS platform, so the first question is why we built such a platform in the first place and how we built it. Following that will be OpenFunction's introduction, including its components, the early adopters and contributors, the roadmap, and the demos. In the end, Xiuming will introduce how to use OpenFunction in autonomous driving. So why do we need an open source cloud-native FaaS platform in the first place? In recent years, we have heard "multi-cloud", "distributed cloud", or "hybrid cloud" more often than ever. I think it is because Kubernetes brings the possibility of being cloud agnostic. But in the FaaS or serverless area, it is difficult to be cloud agnostic, because each cloud provider has its own FaaS platform, and these platforms are usually tightly coupled with that provider's backend services. So it is difficult to move the serverless part of a workload from one cloud to another. Is it possible to build a cloud-native, cloud-agnostic FaaS platform? I think the answer is yes, because recent progress in cloud-native serverless technologies makes this possible. The most famous technology in this area is Kubernetes, but today I want to talk about another two technologies: Dapr and KEDA. KEDA is a great project for auto-scaling applications on Kubernetes based on metrics from different event sources. So we can use KEDA to decouple the auto-scaling of applications from many different event sources, including open source event sources and the cloud providers' event sources. So this is KEDA.
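As a hedged illustration of what the talk describes, here is a minimal KEDA ScaledObject that scales a hypothetical deployment on Kafka consumer lag; the names and addresses (`log-consumer`, `kafka-server:9092`, topic `logs`) are assumptions for illustration, not from the talk:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: log-consumer            # hypothetical name
spec:
  scaleTargetRef:
    name: log-consumer          # the Deployment to scale (assumed)
  minReplicaCount: 0            # allow scale-to-zero
  maxReplicaCount: 10
  cooldownPeriod: 60            # seconds to wait before scaling back down
  triggers:
    - type: kafka               # one of KEDA's many event-source scalers
      metadata:
        bootstrapServers: kafka-server:9092   # assumed address
        topic: logs
        consumerGroup: log-consumer-group
        lagThreshold: "20"      # scale out when consumer lag exceeds 20
```

Only the `triggers` section changes when switching to a different event source, which is the decoupling described above.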
We can also use Dapr to decouple distributed applications from the underlying backend services. For example, we can use Dapr's pub/sub building block to decouple publishing and subscribing events from the underlying message brokers. So users can use open source message brokers such as Kafka or NATS Streaming, or they can use a message broker from a cloud provider, such as GCP Pub/Sub or Azure Event Hubs. Usually, a FaaS platform should support several languages, and these need to communicate with many different kinds of backend services. Suppose a FaaS platform needs to support five languages that communicate with ten middlewares. Without Dapr, each language has to use a different SDK to communicate with each middleware, which ends up with fifty implementations. But with Dapr, each language only needs the Dapr SDK to talk to Dapr, and Dapr handles the rest of the communication with the middlewares, so we only need five implementations. This greatly reduces the complexity of a FaaS platform. It is like adding an additional layer between the functions and the backend services, with Dapr acting as a unified interface to those services. Actually, a couple of weeks ago we shared how we are using Dapr in OpenFunction with the Dapr community, and the two Dapr co-founders, Mark and Yaron, were very glad to see the integration of Dapr with a serverless function runtime like OpenFunction. So finally, what is OpenFunction? There are three components under the OpenFunction umbrella. The first is build. OpenFunction now supports using Cloud Native Buildpacks to build function source code into container images, and other Dockerfile-based build technologies can be used to build applications for now; maybe we can also use them to build functions in the future. The second is serving, which is the most important part of OpenFunction.
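The decoupling described above can be sketched with a Dapr pub/sub component: the application always calls the same Dapr pub/sub API, and only this component file changes when swapping brokers. The names and address below are illustrative assumptions:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus              # the name the application publishes to
spec:
  type: pubsub.kafka            # swap to pubsub.gcp.pubsub or
  version: v1                   #   pubsub.azure.servicebus without touching app code
  metadata:
    - name: brokers
      value: "kafka-server:9092"   # assumed broker address
    - name: consumerGroup
      value: "group1"
    - name: authRequired
      value: "false"
```

This is the "five implementations instead of fifty" argument in practice: the per-broker details live in the component, not in each language's application code.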
Now we support both a sync runtime and an async runtime. For the sync runtime, we currently support Knative, and we are planning to support the KEDA http-add-on as another sync runtime; we actually have maintainers contributing to the KEDA http-add-on project. I think the most unique or differentiating feature of OpenFunction is that the async runtime is powered by Dapr and KEDA. I will explain that in more detail later. The last component is OpenFunction Events. It is somewhat like Knative Eventing, but it is actually inspired by Argo Events, with some differences that I will explain later. OpenFunction build: whenever a function is created, it creates a builder, and the builder creates a Shipwright Build and BuildRun. Together with the build strategy, Shipwright creates a Tekton TaskRun, and the build steps are created in the Tekton TaskRun and executed in order. This is how the function source code is built into a container image. The serving part: as I said, OpenFunction supports the Knative sync runtime and the OpenFunction async runtime. Both the sync runtime and the async runtime can use Dapr; the difference is that the sync runtime only uses the output bindings of Dapr components, because its input comes from HTTP, while the async runtime uses both the input and output bindings of Dapr components, and KEDA is used to auto-scale the functions. In the future, we are going to support the KEDA http-add-on as another sync runtime, and to speed up cold starts, we are also considering a pod-pool method as another runtime: whenever a function needs to be created, it takes an already existing pod from the pool to start the function, so the cold start will be very fast. So this is OpenFunction serving. Next is OpenFunction Events. It is inspired by Argo Events, but the difference is that the event bus in OpenFunction Events is decoupled from the underlying message broker by using Dapr's pub/sub building block.
So users can use Kafka, NATS Streaming, or any cloud provider's message queue such as GCP Pub/Sub or Azure Event Hubs. Whenever events arrive at an event source, the event source can call a sync function directly or write the events to the event bus. Whenever events are written to the event bus, they can trigger an async function directly, or the user can define a trigger to filter the events they are interested in, trigger a sync function, or write the filtered events back into the event bus. So this is OpenFunction Events. Another important component is the function framework. It is inspired by Google Cloud Functions' function frameworks, and we have our own function signature: both sync functions and async functions can use the same signature to define their functions. There are several concepts here: the context, the plugins, the runtime, and the framework. The framework reads the function context, registers the plugins, creates the function runtime, and finally starts the function runtime. Whenever a sync or async function is triggered, the framework first executes the pre hooks, then calls the user function, then executes the post hooks, and finally processes the function output. You can find more details in the links below about the proposal and the function framework implementations. We added the plugin mechanism because we want to support function tracing; tracing function performance is important for a FaaS platform. So let's take an example. A user sends an HTTP request to a sync function, and the sync function then publishes some messages to a Kafka server via its Dapr sidecar. Whenever a message is published to the Kafka server, an async function is triggered through its Dapr sidecar's subscription. For use cases like this, the user needs to know how long the sync function takes and how long the async function takes.
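The execution order just described (pre hooks, then the user function, then post hooks, then output processing) can be sketched as a toy example. This is not the real OpenFunction functions-framework; every name here (`Framework`, `Context`, `invoke`) is invented purely for illustration:

```python
# Toy sketch of the framework's hook ordering. NOT the real OpenFunction
# functions-framework; all names are illustrative assumptions.

class Context:
    def __init__(self, payload: bytes):
        self.payload = payload
        self.out = None
        self.trace = []          # records execution order for illustration


class Framework:
    def __init__(self, pre_hooks=None, post_hooks=None):
        self.pre_hooks = pre_hooks or []
        self.post_hooks = post_hooks or []

    def invoke(self, ctx: Context, user_fn):
        # 1. run the registered pre hooks (e.g. start a tracing span)
        for hook in self.pre_hooks:
            hook(ctx)
        # 2. call the user function
        user_fn(ctx)
        # 3. run the post hooks (e.g. close the span), then the framework
        #    would process the function output
        for hook in self.post_hooks:
            hook(ctx)
        return ctx.out


framework = Framework(
    pre_hooks=[lambda c: c.trace.append("pre")],
    post_hooks=[lambda c: c.trace.append("post")],
)

def user_fn(ctx):
    ctx.trace.append("user")
    ctx.out = ctx.payload.upper()

demo_ctx = Context(b"hello")
result = framework.invoke(demo_ctx, user_fn)
print(demo_ctx.trace)   # order: pre, user, post
```

The point of the plugin mechanism is exactly this sandwich: the tracing plugin can time the span between its pre hook and post hook without the user function knowing about it.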
In the last couple of months, we have worked with the SkyWalking community, and a user can simply add a tracing configuration like this to the function annotations. You can see the provider here is SkyWalking, along with the OAP server address and some tags. Of course, you can also add this tracing configuration to a global config, and that way the tracing will be enabled for all functions instead of a single function. If you add the tracing configuration to the sync function and async function annotations, you can see a topology in the SkyWalking UI like this, and you can also see the tracing data here. Below is the SkyWalking official site; there is a demonstration of the integration there, where you can see the function call stack and the tracing data. So this is OpenFunction tracing. It is already adopted: one of the major telecom companies in China is using OpenFunction to build their FaaS platform now, and UISEE is using OpenFunction to process their data; there will be more details later. Chunxiang, a low-code platform, is using OpenFunction to implement its logging mechanism. We also have contributors writing the .NET function framework. Here is the roadmap. We currently support the Go function framework very well, and next we are working on the function frameworks for other languages. The UISEE team is contributing to the Node.js and Python function frameworks, and we also plan to add the Java and .NET function frameworks. A few days ago, we discussed with the Dapr and Quarkus communities; the Dapr community will add Dapr support to the Quarkus environment, and after that we are going to support running Java functions on Quarkus, which will be much faster than ordinary Java functions. We are also going to support the sync runtime with the KEDA http-add-on. Besides SkyWalking, we are planning to add OpenTelemetry as another tracing provider.
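As a sketch of what the SkyWalking tracing annotation described above can look like: the exact field names vary by OpenFunction version, so treat every key and address below as an assumption and check the OpenFunction docs:

```yaml
metadata:
  annotations:
    plugins.tracing: |          # assumed annotation key
      enabled: true
      provider:
        name: skywalking
        oapServer: "skywalking-oap:11800"   # assumed OAP server address
      tags:
        func: tracing-sample                # illustrative tags
        layer: faas
```

Applying the same block in a global configuration rather than per-function annotations is what enables tracing for all functions at once.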
We are also going to deliver an OpenFunction console; that is planned for the future. So this is the roadmap. Next, I have two demos, and for time reasons I will only demonstrate the second one, but let me explain the first a little. You can use Fluent Bit to forward Kubernetes logs to a Kafka server, and we can define async functions to consume the logs from the Kafka server. Whenever the async function finds an error, it sends an alert to Slack. Let's take a look at the function code and definition. The logic is pretty simple: whenever a 404 error is found in the demo project namespace and all three conditions match, an alert is generated and sent to Notification Manager. Let's see how the function is defined. This is how a function is defined here. You can see the build part: using these settings, a user can build the function source code into a container image. You can see the runtime is async. Also, the scale options from KEDA are here: the min replicas, max replicas, and cooldown period define how the scaling behaves. The most important part is the KEDA trigger, which defines how the function is triggered: the function watches the logs topic of this Kafka server, and whenever the consumer lag is over 20, KEDA increases the replicas. You can see a function has inputs and outputs. The input is from the Kafka server, and this is the Dapr component defined here: you can see the topic we are watching, the Kafka server, the consumer group, et cetera. For the output, whenever an error is found in the function, it sends the output to Notification Manager here; you can see it is just a URL. So this is the first demo. I will now do an actual demo, so let's go directly to the second demo. This is the tracing example: send a request to a sync function, and the sync function triggers an async function.
Let's take a look at the function code. You can see it has the same signature: a context and the payload. The payload is retrieved from the HTTP request, and the function sends the payload to the output binding target. Let's take a look at the function definition. This is the sync function. This is the build part, how the function is built; then the scale options, the min and max replicas. The runtime is Knative. Because this is a sync function, the input comes from HTTP, so it only defines an output. The output is a Kafka server, similar to the previous one; the Kafka server is defined as a Dapr component here, and this is the Knative configuration. So this is the first function. Now the async function: you can take a look here at the Kafka input. Here is the async function code, with the same signature; it retrieves the message from Kafka and just prints it out. So this is the async function. Let's look at how the async function is defined: the build part, the runtime (async), the KEDA scale options (the same), and the trigger, which watches the sample topic on this Kafka server. This async function only has an input this time; the input is also a Dapr component, defined like this. The output is just stdout, so it does not need to be defined here. So let's take a look. The functions are already defined here: this is the sync function, and this is the async function. You can also take a look at the serving; the serving is also up and running. And you can watch the pods: this is the async function, this is the sync function; they have both scaled to zero because there is no payload. So let's send some payload to the sync function. You can see the sync function is starting now. It will cold start and take a few seconds. The async function is also starting now, so we can take a look at it.
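The async function definition just walked through could look roughly like the Function resource below. The field names are paraphrased from the talk and may differ from your OpenFunction version; the image, repository, addresses, and names are all assumptions:

```yaml
apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: logs-async-handler        # hypothetical name
spec:
  image: "<registry>/logs-async-handler:latest"
  build:                          # build source into a container image
    builder: openfunction/builder-go:latest
    srcRepo:
      url: "https://github.com/OpenFunction/samples.git"   # assumed repo
  serving:
    runtime: async                # the Dapr + KEDA powered async runtime
    scaleOptions:
      keda:
        scaledObject:
          minReplicaCount: 0
          maxReplicaCount: 10
          cooldownPeriod: 60
    triggers:
      - type: kafka               # KEDA trigger: scale on consumer lag
        metadata:
          topic: logs
          bootstrapServers: kafka-server:9092
          consumerGroup: logs-handler
          lagThreshold: "20"
    inputs:
      - name: kafka
        component: kafka-receiver # the Dapr component acting as input
    bindings:
      kafka-receiver:
        type: bindings.kafka
        version: v1
        metadata:
          - name: brokers
            value: "kafka-server:9092"
          - name: topics
            value: "logs"
          - name: consumerGroup
            value: "logs-handler"
```

A sync function's resource would look similar but with `runtime: knative` and only an output binding, since its input comes from HTTP.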
The async function is starting now, and you can finally see the message. So this is the demo. Okay, next I'm handing over to Xiuming. Xiuming, it's yours. Thanks, Ben. Hi, everyone. My name is Xiuming Lu, an architect from UISEE, and I'm very happy to share a talk at KubeCon. The topic I'm sharing today is empowering autonomous driving with cloud-native serverless technologies. Next slide, thanks. In recent years, the autonomous driving field has been growing rapidly and attracting a lot of attention. Cloud native came to maturity and prosperity in the same period, so it is natural for these two fields to join forces and explore a route for cloud native to empower autonomous driving. I would like to start my presentation with a brief introduction to autonomous driving. Here is a simplified demonstration of an autonomous driving interaction, where a user applies for a car from the app, takes the car to the destination, and finally gets off. In this classic and simple process, from the user's point of view, it involves vehicle monitoring and command dispatching within an authorization framework. From the perspective of the vehicle, it involves a series of AI technologies, such as environmental perception, pedestrian avoidance, route planning, chassis control, multi-vehicle coordination, and so on. We need to ensure high availability, error traceability, scalability, and security for our services and middleware. Next, we will start from the specific perspective of cloud-agnostic FaaS to explain why autonomous driving needs a cloud-agnostic FaaS platform. First of all, cloud agnostic: one of the core technologies of autonomous driving is artificial intelligence algorithms, and the reliance on data for algorithm enhancement and improvement is so high that the data has high value, and customers are often reluctant to put data in the public cloud, instead using their own private cloud clusters.
In addition, the wide range of cloud providers and infrastructure choices brings challenges to upper-layer development. A cloud-agnostic platform can make development smoother and save significant costs in the face of different customer-owned server environments and constrained vendor choices. Second, FaaS: an autonomous vehicle is essentially an extremely complex IoT device, with numerous types of sensors and suppliers, resulting in the need for multiple parsing scripts and an extremely large number of processing modules handled by multiple development teams, leading to the introduction of different libraries and even different programming languages. This poses difficulties when modules need to be merged and refactored at a later stage. In addition, the amount of autonomous driving data is far beyond the reach of traditional IoT devices, and streaming analysis of large amounts of data requires a tool with great scalability. Finally, the rapid development of autonomous driving also means that the logic of data processing changes frequently based on rapidly changing industry requirements. Different weather conditions, sites, and business models require different processing logic, and we expect to find a FaaS-based way to replace the computational logic at a small granularity, instead of modifying one line of code and then spending the time to run a full CI/CD cycle for the entire service, which can take a whole day. In conclusion, we expect to have an open source FaaS platform based on cloud-native technologies, so we can quickly get the solution off the ground, have control over the platform itself, and enjoy the convenience of cloud-native technologies. At this point, OpenFunction came into our view, and its introduction of Dapr and KEDA as part of its foundation brought it into our consideration. Next, we will talk about why KEDA matters. Automatic scaling is an efficient resource utilization strategy, especially in scenarios where data traffic is erratic, which is exactly what autonomous driving needs.
Autonomous driving can also be understood as an AI driver who replaces or assists human drivers to do the job, and thus to some extent it retains the work habits of human drivers, resulting in a potentially significant peak-and-valley effect on traffic. In the diagram on the bottom right, we can see that we have arranged different work patterns for the driverless vehicles according to an eight-hour workday. In some scenarios, the driverless taxi is only one part of the customer's business, so there is no need for the vehicle to work 24 hours every day. In other scenarios, the vehicle needs to work 24/7 to maximize its capability. This brings significant traffic differences. Autonomous driving sensors tend to maintain a high frame rate of data acquisition for safety, and sensors such as lidar generate a large amount of data, so even a small change in the number of vehicles can bring a significant swing in data traffic. Next, we will talk about why Dapr matters. We have already mentioned some of the cross-language requirements due to the many data types and processing modules; as seen in the sketch in the lower right, the data passes through many modules in the car-to-cloud link. Dapr is also very attractive for decoupling. On one hand, the choice of middleware is often limited by the customers; on the other hand, with a microservice architecture, the code for establishing connections with middleware, handling errors, and adapting to different patterns is written over and over again. Dapr's bindings can solve this problem by abstracting it into a unified specification. Shadow devices are an abstraction of physical devices in the cloud, and the cloud often uses shadow instances to complete the monitoring and control of the actual devices, generally using an asynchronous architecture for performance reasons; Dapr's pub/sub building block can also cover these usage scenarios.
Next, please. Here is an example of using OpenFunction to solve a real-world problem in the autonomous driving field: archiving data with OpenFunction in a customer's private cloud environment. The functions in the diagram are all serverless functions that we have written and deployed on a cloud-native FaaS platform, as the demo shows. In this workflow, cron jobs or HTTP requests are received as inputs and then passed to the first function, which splits the task and delivers sub-tasks to the message broker to drive the other functions, which are responsible for handling the sub-tasks and are written in different languages. Those functions receive data as input in byte-array form provided by the FaaS platform; the received data includes context info and a sub-task description. Each function then pushes a zip package to the S3 service after data collection, data masking, and data compression, and finally exits. There is no need to pay attention to the middleware connection and no need to pay attention to the language, so it is convenient for each function to use the most efficient open source library, and it lets programmers finish their job with minimum effort. What is more, when data traffic increases, KEDA automatically scales up function instances based on the amount of data in the broker backlog, allowing developers to focus entirely on logic implementation. As for what is expected from a cloud-agnostic FaaS platform, we would like to have the following features. First, tracing, logging, and metrics: the data needs to go through hundreds of modules and a dozen network communications before it can be archived in the cloud, and errors may occur at any step. Troubleshooting is one of the most important problems in the development of an autonomous driving system, and we need a unified tracing mechanism within a complex call chain, because an efficient observability mechanism saves time and money.
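The "push a zip package to the S3 service" step maps naturally onto a Dapr output binding. Here is a hedged sketch of such a component for an S3-compatible store in a private cloud; the names, endpoint, and secret references are all assumptions:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: archive-store             # hypothetical binding name
spec:
  type: bindings.aws.s3           # also works with S3-compatible stores
  version: v1
  metadata:
    - name: bucket
      value: "archive-bucket"     # assumed bucket
    - name: region
      value: "us-east-1"
    - name: endpoint
      value: "http://minio.storage:9000"   # assumed private-cloud endpoint
    - name: accessKey
      secretKeyRef:
        name: s3-secret           # assumed Kubernetes secret
        key: accessKey
    - name: secretKey
      secretKeyRef:
        name: s3-secret
        key: secretKey
```

Because each function only invokes the binding by name, swapping the private-cloud object store for a public-cloud one changes this component, not the function code.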
A robust system and efficient error handling are important aspects of getting autonomous driving on the ground, given the safety requirements. Second, a plugin mechanism and package management: this is an important feature to make the platform more powerful and easier to use. Third, replaceable underlying technology and easy-to-use interface wrappers: the former ensures that no upper-level development will be nullified due to the replacement of a dependent technology, while the latter ensures the efficiency of upper-level development. Last but not least, an active and efficient maintenance team: currently, cloud-native serverless platforms are in their early stages and need a stable and effective maintenance team like OpenFunction's to make the community more prosperous and to build better open source products. That concludes my presentation. Thank you for your time. Thanks, Xiuming. I think that's all of our sharing. If you have any questions, you can ask us online.