Hello everyone, thank you for coming to our session. This session is entitled "Connected Vehicle Platform: Kubernetes and Vehicle Service Mesh." Let's get started. First we will introduce our company and our team. Second, we will explain what we are developing and why we are using Kubernetes. Then we will go into the technical details.

Let me introduce myself. I'm Koizumi, project manager of the Misaki project. I joined Denso Corporation in 2017, and I'm developing an edge-and-cloud integrated platform to bring the cloud native ecosystem to vehicle applications. Aman, could you introduce yourself?

Okay, hi everyone. My name is Aman Gupta and I'm a software engineer at Denso Corporation. I joined Denso in October 2019, and this is my first full-time job. I am also developing the Misaki project in Koizumi-san's team. Koizumi-san, please continue.

Okay, I'll explain the background of our project. Denso is an automotive parts manufacturer. We provide many types of products and systems to vehicle manufacturers worldwide, and we hold the second-largest global market share in the automotive parts industry. Denso is trying to become a service enabler by utilizing cloud technology and creating additional value: for example, remote maintenance, fleet management, autonomous driving, and traffic congestion prediction. To become a service enabler, we are doing three kinds of activities. First, to hit upon new mobility service ideas, we practice design thinking. Second, to develop prototypes quickly and launch them easily, we use public cloud and OSS. Third, to create user value together with customers, we do agile development. These three approaches are the same as the Silicon Valley approach. Our team especially focuses on the connected vehicle platform. Denso provides hardware parts like ECUs, sensors, and air conditioners.
But not only that: by utilizing edge computing and cloud computing, we also provide additional functions, such as in-vehicle microservices and the cloud side of the microservice modules. Based on that, mobility service providers can create new services easily and quickly.

This is our team. We used to work together in the development room, but recently we have been working from home because of the COVID-19 situation.

Actually, there are some barriers to developing vehicle applications; it's not the same as cloud application development. Let me explain the barriers. The first barrier is software. Embedded software runs on the vehicle's ECUs, the electronic control units; around 25, and up to 70, ECUs run inside a vehicle. Small ECUs have no operating system, and mid-size ECUs use a real-time operating system; Linux is not standard yet. The computing resources of these ECUs are quite limited, so there is no spare computing capacity for additional applications so far. Emulating the ECU environment on a development PC is really difficult: it requires special hardware and a same-scale emulator. And keeping the software on the vehicle up to date is not easy. These problems are the first barrier to software development for vehicles.

The second barrier is the unstable connection. Vehicles are a kind of distributed system, and they keep moving. Sometimes there is no signal and the network is disconnected, and sometimes the bandwidth is not good enough, so upload and download speeds are quite slow. Developers have to think about these network issues: for example, storing data locally and sending it again later, and if that fails, rolling back and retrying. That kind of processing is quite troublesome. This is the second barrier.
These two barriers are obstacles to developing vehicle applications, but the motivation for developing them is getting bigger and bigger. For example, the market for connected vehicles and their applications is growing, and the number of users is rapidly increasing. The vehicle itself is also shifting from a feature-phone style to a smartphone style, with much more flexibility and more computing resources. New vehicles will have a more flexible OS, like Linux, and a platform on top of it, and some new use cases have already appeared: for example, anomaly detection and real-time driving support.

So, to develop a prototype of the smartphone-like vehicle, we built the Kubernetes-based connected vehicle platform, Misaki. On this platform, no special hardware is required, it is quite easy to deploy and update vehicle applications, and developers no longer need to worry about network disconnections or failures. It enhances productivity for vehicle application developers.

From an architecture perspective, the ideal architecture is like the left side: centralized computing resources and centralized management. That is quite easy to manage and control, but in reality there are network delays and disconnections, so it doesn't work. Our approach is the right side: distributed computing resources, like a small cloud located inside the vehicle, while the management function is centralized in the cloud. That simplifies the management function.

This is the architecture overview. Aman will explain the details later, but briefly, we are using some open source on AWS, such as a digital twin implementation, and we are also using Kubernetes. Running an entire Kubernetes cluster inside a vehicle is a little bit heavy, so we split it: the master node runs on the cloud side, and the worker nodes run on the vehicle edge side.
Both sides are interconnected using a VPN, so even if the vehicle is disconnected, the containers on the edge worker nodes keep running.

So, why Kubernetes? The reason we selected Kubernetes is flexibility. Current ECUs are not a good fit for Kubernetes because they don't have enough computing power, but integrated ECUs, which are much bigger, are becoming popular, and additional computing resources, such as a vehicle computer, will be installed in the vehicle. So inside the vehicle there will be several small compute resources, and Kubernetes is a really good way to gather those resources into one single pool and run containers on it.

Kubernetes' flexibility also enables workload offloading. The vehicle edge only has limited computing resources, and sometimes an application consumes a lot of them. In that case, the high-priority applications keep running inside the vehicle, while the low-priority applications are offloaded and migrated to the cloud. The application itself keeps running, but the workload inside the vehicle stays stable.

The other reason we use Kubernetes is the service mesh. Misaki provides common network functions such as data queuing. If there were no such network layer, the service mesh, application developers would have to think about network disconnections and slowdowns and resend data themselves. By preparing this common function, application developers just try to send the data; if disconnected, the data is stored in the queue, and after the connection is regained, the data is sent again. The application never even notices the network disconnection, which simplifies the development process.

We were really inspired by a presentation at KubeCon North America last year.
The US Air Force has already tried running Kubernetes in the F-16 fighter jet, so using Kubernetes in a car should be easier than that. So these are the reasons we are using Kubernetes. Okay, let's move on to the technical details and the demonstrations. Aman, go ahead.

Thank you, Koizumi-san, for presenting your part, and hi everyone, I'm back again. Now I will tell you about the technical details of our Misaki project. Let's start with the most basic thing: what our Misaki project consists of. We call it a vehicle cluster. Think of it as a cluster made of an edge device and a cloud instance, connected by Kubernetes: the master node lives on the cloud side, the worker node lives on the edge side, and they are connected via VPN. In a real-world scenario, we could have one cloud instance per car, or a cloud instance could be shared among a few cars. That's the scenario we are thinking of and developing for.

Misaki has two different components: the Misaki orchestrator and the Misaki service mesh. I will go through both of them and show you a demo of a practical, real-life scenario for each. Let's start with the Misaki orchestrator.

This is the architecture of the Misaki orchestrator, and it mainly consists of four components. On the far right-hand side we have a UI, and on the far left-hand side we have the vehicle cluster, so it's an end-to-end connected vehicle platform. I'll explain all four components now.

Let's start with the UI. We call it Misaki UI. Misaki UI is implemented in Nuxt.js, a Vue.js framework. Think of it as a dashboard for an administrator, where he or she can see the list of vehicles and also the list of applications to deploy to those vehicles. The applications we are talking about here are basically Helm charts.
The Helm charts live in a chart repository, which can be hosted anywhere, such as GitHub or GitLab; we are using ChartMuseum, currently hosted on our EKS cluster. An administrator chooses a Helm chart and deploys it to a vehicle, and this request goes from the UI to the next component, Misaki API.

Misaki API is a REST API implemented in Golang. Its main function is to render a Helm chart into a usable list of Kubernetes manifests. A typical Helm chart looks like the left-hand side, and the most important part of a Helm chart is the values.yaml file. On the right-hand side, you can see the resulting manifest list, which contains a lot of resources, like secrets, deployments, and services.

After we render the Helm chart and convert it into manifests, we send them to a database. For the database, we are using a digital twin, namely Eclipse Ditto, an open source database solution for IoT devices developed by the Eclipse Foundation. The key feature of this database for us is that clients can connect to it via a WebSocket connection. In our case, two components are connected to the digital twin: Misaki API, which sends the request, and the Misaki Kubernetes agent, which runs on the vehicle cluster. Misaki API stores the rendered Kubernetes manifests under a vehicle ID in the digital twin, and once they change, our Kubernetes agent fetches them from the digital twin via the WebSocket. That's the role of the digital twin here.

Now we come to the Misaki Kubernetes agent. As you have seen, the Kubernetes agent fetches the Kubernetes manifests from Ditto, and its main job is to apply these manifests to our vehicle cluster. Our vehicle cluster has two different components, right? One is the cloud instance and one is the edge device.
So an application can be deployed on the edge device, on the cloud instance, or split across both. That's the Misaki orchestrator, basically. Here is the overall overview: there is a dashboard to choose applications; when we choose one, the request goes to Misaki API, which gets the actual Helm chart from the application repository. The chart describes the containers in this scenario, and they can be deployed on both the edge and the cloud instance.

Today I will show you a practical scenario with a real-life application which we have developed in-house. The flow will be something like this. First we will delete an application; the application we'll delete is called can-uploader for Prius, and it has four different components. After deleting it, I will show you the terminal on the Jetson Xavier and show you that the pods are no longer there. Then I will install another application, called via-containers for Prius, which has eight different pods, and I will show the Jetson Xavier screen again, where you will see the eight pods.

Let's move to the video now. Here is the Jetson Xavier screen, and at the top you can see the four pods of the can-uploader application. We are going to delete them. Right now it's in the shutting-down phase, and you can see the pods in the Terminating state on the Jetson, and now they are no longer there. These changes are also reflected on the dashboard: you can see there are no containers available.

Now we will install the other service. This is basically logging in to the chart repository; these are the charts available, and we will install via-containers for Prius. We can also pass some additional inputs to this Helm chart: in our case, we give the version and also the login details for the container registry, because our Kubernetes needs to pull the Docker images, right?
So we have done that, and now it's in the provisioning state. Let's wait for a moment... and now it's in the working state. Let's check the Jetson Xavier: you can see there are eight different pods now at the top. This change is also reflected on our dashboard: on the edge side we have eight different pods running, and the green dot means a pod is in the Running state. Our via-containers are responsible for sending files to an S3 bucket every 30 seconds, and I will show you that they just sent another file. Let me refresh... you can see there are two files now, so our containers are working fine. This is the end-to-end process, from choosing an application, to deployment, to actually getting data into the S3 bucket.

Okay, let's move to the second component now, the Misaki service mesh. Let me give you a brief overview again. Why are we using a service mesh? A service mesh takes network concerns away from the applications, so application developers no longer need to implement distributed-system practices like timeouts, service discovery, et cetera. They can focus on business logic and value rather than worrying about the network.

How are we doing this? We are using Envoy as a sidecar proxy, and we also have the Misaki control plane, which manages the policies for Envoy. The architecture looks something like this: we have a vehicle cluster; on the top side is the cloud node, running an application, app C, and on the bottom side is our edge node, running two applications, app A and app B. All of these applications run Envoy as a sidecar proxy, so all outgoing and incoming requests pass through Envoy, and Envoy decides where each request goes. To manage the policies for Envoy, the control plane is a centralized unit in the vehicle cluster. We also have an additional component called the queue.
The queue stores data when no network is available; it acts as a proxy server. It stores HTTP requests and streaming data from applications when there's a network disconnection, and it resends them to the server whenever the network is connected again. In the scenario where the network connection is available, we send an MP4 file to the S3 bucket, and it goes to the S3 bucket directly. Now consider the disconnection case: our application tries to send the file to the S3 bucket, but the request is redirected to the queue, and once we regain the network, the file is automatically sent to the S3 bucket. That's the function of the queue and the service mesh.

Now I will show you a real-life scenario of an application running on the Jetson Xavier, and this Jetson Xavier is installed in an actual vehicle, a Toyota Prius. So let's start. This is the S3 bucket; it's currently empty, and we will try to send a file. This is the Jetson Xavier screen. On the left-hand terminal you can see the logs of the service mesh; on the top right, my colleague will make a request to send the file to the S3 bucket; and on the bottom right you see the logs of the replay component, which will send the data once a network connection is available.

Right now the connection is available, so we just see a normal response, and our file is sent properly to the S3 bucket. But soon my colleague will disconnect the LTE dongle from the Jetson Xavier. How do we disconnect? We put the LTE dongle in a Faraday bag temporarily, which blocks the signal. Now my colleague has made a request to send the file to the bucket again, and you will see that the response is something different. Please wait a moment... you can see there is a temporary-redirect message.
That means our request has been stored in the queue. Now my colleague will take the dongle out, and the network will be reconnected. Once the network is available, our queue does its job, and you can see the changes in the bottom-right terminal: the replay component sends the file to the S3 bucket. Here you can see the file being sent, and we get a response, so it should be in the S3 bucket. Yeah, and we have the file. That's it for the service mesh demo scenario.

Now I'll quickly summarize our project. We developed a prototype of a Kubernetes-based connected vehicle platform, and Kubernetes helped us a lot to develop and deploy applications on the vehicle, but there are still a lot of challenges to be faced. The challenges we are currently facing are: how to update the Kubernetes worker nodes in each vehicle; how to manage many Kubernetes master nodes, or whether we should instead think about a one-master, many-edge-vehicles architecture; and we also want to try some lightweight Kubernetes distributions, like K3s and Qwitch, to minimize the CPU and memory requirements on the edge device.

Our journey is still just beginning, and there is a lot more to come. You can visit us at our GitHub page, Misaki-IEO. We are planning to release some technical documentation in the future, so please stay tuned. Thank you everyone for attending our session. Yeah, thank you for joining.