Hello everyone. Thank you for joining the session on Edge Computing for Connected Cars: Use Cases, Requirements, and Architecture. My name is Toru Furusawa. I'm a researcher at Toyota, where I mainly work on edge computing technology for connected vehicles. In this presentation, I will first introduce connected cars and their use cases and requirements. Then I will talk about the expectations and challenges of edge computing for connected cars. Lastly, I will show some of our approaches and internal PoC demonstration experiments for the emerging challenges.

First, I will talk about future connected cars and their use cases and requirements. "Connected car" is a term for vehicles that are connected to a mobile network. Many cars are already connected to the network, and essentially all cars sold in the future will be connected to the network, at least for Toyota cars. We expect that around 2025, 100 million cars worldwide will be connected to the network, and the total traffic they generate will be orders of magnitude larger than today. This is a huge amount of data, and concentrating all of it in the cloud will place a heavy load on the mobile network and the cloud. So a system to distribute that heavy load is needed.

So, what kinds of services do connected cars provide? They can be divided into three main service types. The first is in-vehicle infotainment (IVI). IVI refers to vehicle systems that combine entertainment and information delivery for drivers and passengers, such as GPS navigation, music and TV, and voice applications. The second is the intelligent transport system (ITS). This includes driving control services such as local hazard warning, collision avoidance, and cooperative adaptive cruise control. The third is vehicle IoT, or vehicle big data: lots of data, including high-definition map data, video data, and vehicle body sensor data, is collected into the cloud.
In fact, we believe the third one is going to be the most important in the future. So why is vehicle big data so important? As shown on this slide, data collected from vehicles into the cloud can be used for a variety of services. For example, it is expected to be applied to a variety of automotive services such as sales, marketing, and maintenance services, in addition to research and development and enhanced vehicle features. It is also expected to be used in conjunction with other companies' public clouds and public data for extended services such as maps, music, and video services, and possibly auto insurance services.

Now, let's consider the requirements of connected car services. I mentioned earlier that there are three types of services, and each has different requirements. For IVI, the key requirement is to improve the user experience. The communication will involve a lot of downloads from the cloud, so broadband cellular communication is suitable for IVI. For ITS, safety is the most important factor, and low-latency, highly reliable vehicle-to-vehicle communication is required. For this reason, dedicated short-range communication (DSRC) and cellular V2X communications are suitable. For vehicle IoT, capacity is the most important factor. Although the latency requirement is relaxed, data upload to the cloud is required, so wireless LAN and cellular communications are used for IoT.

Each type of service has different requirements, but the big data traffic from IoT services is so large that it can negatively impact services like ITS, which require small capacity but low latency. We believe this will be a major challenge. Let me give you an example of the IoT big data capacity issue. At the beginning of this talk, I mentioned that 100 million cars are expected to be connected to the network by 2025.
Assuming that 3 million of these cars are Toyota cars, and each car uploads 20 gigabytes of data per month to the cloud, that means 60 petabytes of data transactions per month occur in the cloud. So, for example, if the transaction rate in the cloud is 10 gigabytes per second, processing this data will take more than two months. So we need an efficient way to process this big data.

To process this kind of big data efficiently, load distribution will be the right approach. So, what exactly is the right solution? There are several possible approaches to distributing the load. The first is to collect only the data that is really needed. For example, if a car is parked in a parking lot, its drive recorder video won't need to be uploaded to the cloud. The second is to use edge computing to distribute processing at the edge; I will discuss this in more detail later. The third is to send data at times when the network is not congested. For example, a car can avoid congestion on the mobile network by sending data during late-night hours. There are many possible approaches, but in this session I will focus on the second one, edge computing.

As you may know, there are many different types of edges, ranging from edges close to the device, to edges located in the telecom carrier's infrastructure (MEC), to edges provided as just one zone in a public cloud. So which edge is best for connected cars? In my opinion, adoption will start at the edges closer to the cloud side, because the demand for big data communications will fluctuate and will require a certain amount of scalability at the edge as well. However, this is still being discussed, and there is no definitive answer yet.

To enable edge computing for connected vehicles, we founded the Automotive Edge Computing Consortium (AECC) in 2018. AECC includes a variety of companies from the automotive sector, telecom carriers, cloud providers, service providers, and more.
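The capacity example mentioned above (3 million cars uploading 20 GB per month, against a cloud ingest rate of 10 GB per second) can be checked with quick back-of-envelope arithmetic. The numbers are the ones quoted in the talk, using decimal units (1 PB = 1,000,000 GB):

```python
# Back-of-envelope check of the big data capacity figures quoted above.
cars = 3_000_000          # connected cars uploading data
gb_per_car_month = 20     # GB uploaded per car per month
rate_gb_per_s = 10        # assumed cloud ingest rate, GB per second

total_gb = cars * gb_per_car_month        # 60,000,000 GB = 60 PB per month
seconds = total_gb / rate_gb_per_s        # 6,000,000 seconds to ingest it all
days = seconds / 86_400                   # more than two months
print(total_gb / 1e6, "PB;", round(days, 1), "days")  # → 60.0 PB; 69.4 days
```

So even at a steady 10 GB/s, one month's worth of uploads takes roughly 69 days to process, which is why the talk concludes that load distribution is needed.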
As of September 2020, we have 26 member companies. AECC defines specific service scenarios, use cases, and requirements for edge computing for connected vehicles, and compiles recommended architectures and solutions based on these requirements in technical reports. I'm actually one of the working group members of AECC now, and I contribute to defining some of the specific use cases and requirements. By the way, most of the materials, such as white papers and technical reports, are available on the AECC website, so please take a look if you are interested.

Next, I would like to discuss the concept of edge computing at AECC. In a nutshell, it is distributed computing on localized networks. By distributing computing to the edge, we expect to reduce processing time. Also, the big data traffic is spread across the edges via the localized networks, and data from vehicles in the same region is consolidated and efficiently delivered via the edge. Based on this basic concept, the AECC defines specific use cases, requirements, and architecture, but today I won't go into those details.

Although AECC has defined a variety of use cases, requirements, and architectures, there are still many challenges to be resolved in order to realize the automotive edge computing concept. First, there are many challenges related to the edge infrastructure: What is the best method to split traffic off to edge servers in a mobile network? How should the infrastructure of the cloud environment at the edge be implemented? Specifications and implementations of such edge infrastructure seem to be an area of particularly strong interest to telecom operators, and there are many standards and open-source implementations. For example, the Akraino project under LF Edge has developed many blueprints that implement various MECs. Also, a number of challenges remain as to how to deploy and use edge applications.
For example, how should we design distributed applications? How do we deploy applications to the multiple edges around the world? The edge has limited resources compared to the cloud, so how do we manage edge resources? It seems that there are still few specifications or implementations for these edge applications. We are working on several activities to solve edge application issues as well as edge infrastructure issues.

Let's think again about the challenges of edge cloud applications. Edge clouds have limited availability and scalability compared to public clouds. On the other hand, communication demand from connected cars and edge application processing demand are expected to change drastically with time and location. For example, if there is a traffic accident and sudden traffic congestion, the communication demand on nearby edge servers may also increase dramatically. Such sudden fluctuations in demand can cause significant service delays or even outages, which is a major challenge. There are many possible solutions to this challenge, with a variety of approaches. However, in this session, I will present two approaches as examples, along with simple implementations and test results to check their feasibility. Please note that I will not go into the detailed techniques. The first is a method to scale out to nearby edges, and the second is a method to control the upload of vehicle data.

The first approach is the idea of assembling clusters of nearby edge clouds in the same region. For example, if the number of vehicles increases and a particular edge site is suddenly overloaded, it will first scale out in response to the load within the edge site. However, if the edge site runs out of resources and cannot scale out any further, it can temporarily continue to scale out to servers at nearby edge sites to prevent service outages. That is the idea.
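The overflow policy described above can be sketched as a small placement function: fill the closest edge site up to its capacity, then spill the remaining replicas to a nearby site. This is only an illustrative sketch of the idea; the site names and capacities are made up and this is not the logic of the actual Kubernetes/Knative PoC:

```python
# Hypothetical sketch of the "scale out to nearby edges" idea: place
# replicas at the closest edge site first, then overflow to neighbors
# once that site's capacity is exhausted. All names/numbers are assumed.

def place_replicas(demand: int, capacity: dict, order: list) -> dict:
    """Assign `demand` replicas across edge sites in preference order."""
    placement = {site: 0 for site in order}
    for site in order:                       # closest site comes first
        free = capacity[site] - placement[site]
        take = min(demand, free)
        placement[site] += take
        demand -= take
        if demand == 0:
            break
    return placement

capacity = {"edge-a": 4, "edge-b": 6}        # max pods per site (assumed)
print(place_replicas(3, capacity, ["edge-a", "edge-b"]))  # fits within edge-a
print(place_replicas(7, capacity, ["edge-a", "edge-b"]))  # overflows to edge-b
```

In the demand spike case, edge-a is filled to its limit of 4 pods and the remaining 3 land on edge-b, which mirrors the temporary scale-out to a neighboring site described above.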
To confirm the feasibility of this idea, we performed a small PoC demonstration, implemented with Kubernetes and Knative, to build and test the operation of a serverless edge computing infrastructure that runs across multiple nearby edge sites. The example workflow is as follows. First, connected cars access the application at the closest edge site via the mobile network. The number of pods running the application is scaled up and down based on the frequency of access. If application accesses become more frequent after scaling out to the limit within the closest edge, the application pods are deployed and run on adjacent edge sites to distribute the load to the neighboring edges. We confirmed that this simple experiment works as expected. However, there are still many issues to consider, such as how to select the adjacent edge when there are multiple candidates, and whether it works properly in a large-scale environment, but I'll skip those details in this presentation.

The second approach is to control the amount of data from cars to avoid congestion at the edge. The idea is to measure the number of cars connecting to the edge as a metric for the real-time load of the edge, and then adjust the size of the data sent from individual cars depending on the number of connections, so that the total traffic coming into the edge does not cause congestion. For example, if the number of cars is low, as shown on the left side, the frequency of data transmission and the video resolution will be high. But if the number of cars is high, as shown on the right side, the frequency of data transmission and the video resolution will be lowered to prevent congestion of the mobile network. In practice, we would need to control this according to the specific application and its use, so please note that this is just an example of a simple method. To check the feasibility of this idea, we built a simple PoC test environment.
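The rate-control idea above can be sketched as dividing a fixed edge ingress budget by the current number of connected cars and picking the highest video profile that fits. The budget and profile values here are assumptions for illustration, not the parameters of the actual PoC:

```python
# Simple sketch of the upload control idea: keep total edge traffic
# roughly constant by lowering each car's video profile as the number
# of connected cars grows. Budget and profiles are assumed values.

PROFILES = [                      # (label, Mbit/s per car), best first
    ("1080p-high-rate", 8.0),
    ("720p-mid-rate", 4.0),
    ("480p-low-rate", 1.0),
]
EDGE_BUDGET_MBPS = 400.0          # total traffic the edge should accept

def pick_profile(num_cars: int) -> str:
    """Choose the best profile whose per-car rate fits the shared budget."""
    per_car = EDGE_BUDGET_MBPS / max(num_cars, 1)
    for label, mbps in PROFILES:
        if mbps <= per_car:
            return label
    return PROFILES[-1][0]        # floor: lowest profile even if over budget

print(pick_profile(10))   # few cars: plenty of headroom per car
print(pick_profile(200))  # many cars: drop to the low profile
```

With 10 cars each car gets a 40 Mbit/s share and keeps the high profile; with 200 cars the share drops to 2 Mbit/s and every car is asked to fall back to the low profile, so the aggregate stays near the budget, much like the roughly constant total traffic observed in the experiment.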
We wanted to simulate a mobile network, so we built a small LTE emulation environment using COMAC, which is an open-source project of the ONF (Open Networking Foundation). We also deployed two edge applications, or MEC applications, at the location where the SPGW is connected, as shown in this figure. One is the application that measures the load and controls the data upload of each vehicle, and the other is just a sample application that recognizes the video sent from the vehicles.

The experimental scenario is as follows. First, the data upload control application measures the number of vehicles connected to the underlying mobile network. Then, based on the number of connections, a request is made to each vehicle to adjust its transmission frequency and video resolution to change the upload size. To meet the upload size specified by the request, each vehicle sends its video data to the video recognition application, and this cycle continues at regular intervals.

Finally, here is the result of the experiment. We measured the traffic of vehicle data coming into the video recognition application while the number of vehicles was increased by one car every 40 seconds. The orange line represents the number of vehicles, and the green line represents the total traffic. We confirmed that the total traffic did not increase linearly as the number of vehicles increased, but remained roughly constant.

Having presented two approaches and examples of simple PoC demo implementations, let me add one comment. Traditionally, such experiments have required the cooperation of telecom carriers and telecom equipment vendors, sometimes with paid contracts. However, this time, these PoC implementations and experiments could be done very quickly and easily by ourselves using open-source software. The open-sourcing of telecom network technology makes it possible for end users like us to implement and test it ourselves.
So we are very confident that this will lead to a new wave of innovation in telecom network technology. Finally, let me summarize the presentation. In this session, I introduced the use cases and challenges of connected cars and outlined the promise of edge computing as an approach to solving those challenges. I showed the demanding challenges for practical implementation, and finally, I presented several of our approaches to solving those challenges, along with some implementations. That's all for my presentation. Thank you.