So, first of all, thanks for joining this event. I hope everybody is staying safe at home in this pandemic time and managing your daily work from there. Stay safe.

To start the session: as part of this event, I will give a short presentation on how we can use federated learning in a connected-car scenario, in an era where 5G is bringing much richer communication to our devices. Before I get into the agenda, let me introduce myself. My name is Anurag Agarwal, and I am currently working as a solution architect with Tata Consultancy Services in India. In my center of excellence I look after multiple initiatives around 5G and how 5G technology can be used in different industry segments, be it smart manufacturing, be it automotive, and so on. Out of personal interest, automotive is one of the areas I keep focusing on. We have also developed multiple reference use cases, I won't call them industry-grade use cases, but reference use cases to showcase the capability of AI and ML and how that can be leveraged in industry.

As part of today's discussion, we will briefly touch upon what 5G V2X connectivity is, how it helps you connect vehicles to the network, and what kinds of messages they can exchange. We will talk about autonomous vehicles and the connectivity requirements they impose on the network. Then we will briefly cover federated learning, its challenges, and some open-source frameworks. Finally, I will come to a small architecture showing how 5G, together with AI and the edge, will fit into one network going ahead.
I will try to highlight how these technologies will leverage each other to create end-to-end use cases that are more intelligent and more autonomous. At the end, I will briefly explain one use case that we have implemented in my lab, and then I will open the floor for questions. Feel free to drop any questions in the Q&A box; I will take them up as and when I can.

So, quickly getting into it. How did we typically see the automobile until about two decades back? A standard geared or non-geared car with basic functionality and certain automation features like cruise control, certain controls within the driver's reach. Then came the era of Bluetooth, Wi-Fi, 3G, and 4G, and vehicles started getting sensors: GPS, onboard sensors, which began enabling driver assistance. Basically, this gave you a certain level of automation, what I call conditional autonomy: certain activities are performed by the vehicle, but under the guidance of the driver, with some functionality offloaded to the device, which in this case means the automobile.

Now we are in what I call today's era, where we are leveraging 4G and 5G to create the next generation of vehicles. Autonomous vehicles are the talk of the town; everybody is talking about them, and all the big names are working on them. A couple of them have done test drives, and a few have even launched on the road, but unfortunately it is not foolproof yet; there is a long way to go. And what is making this happen is compute, specifically the low cost of compute.
When I say low cost of compute, I mean compute has become so cheap that I can bring it onboard the device itself. I no longer rely only on the data from the sensors in my vehicle; I can also connect to the network, get relevant information, and then take a decision. So I have a huge amount of onboard compute, hundreds of sensors on the vehicle, and high-precision cameras that can be the driver's third eye, and eventually the only eye of the vehicle. Eventually you will see so much processing onboard that the vehicle is essentially a computer.

This is where research stands today, and it will evolve further as the technology evolves and as we achieve more and more optimization. But another key factor here, alongside onboard sensors and onboard compute, is communication. What 5G typically promises is very high bandwidth, ultra-low latency, and very high reliability. With those promises, it becomes evident that 5G can be leveraged in vehicles.

As part of this endeavor, the standards bodies have started customizing the 5G stack specifically to support vehicles. They began by studying multiple use cases: if two vehicles have to communicate with each other, how will they do that? If my vehicle has to communicate with a pedestrian walking on the road, a cyclist, or roadside infrastructure, what kinds of changes would be required? At the same time, they are looking beyond vehicle-to-vehicle connectivity: what if my vehicle has to connect to the network and get information from servers in the cloud?
What would it take to enhance the stack so it can support all of that alongside regular eMBB traffic? The standards bodies started with D2D (device-to-device) technology a few years back and then further enhanced it specifically for vehicles. This work evolved through Release 14, the previous release, and Release 15, the 5G release, and it has now reached a stage where it can be adopted by the vehicle industry. I should not say "will be"; a couple of these use cases have already been experimented with, and there are a lot of trials going on around the world.

One use case I find very interesting is intelligent traffic routing: if all vehicles are either connected or autonomous, I should be able to route traffic in such a way that there are minimal traffic jams, I can lower the carbon footprint, and I can ensure my emergency vehicles reach their destination on time. The emergency vehicle is the use case I will showcase towards the end of this presentation. The second prominent kind of use case, which I briefly touched upon, is the autonomous vehicle with onboard intelligence: the vehicle can think for itself; it has its own brain now. And then there is real-time infotainment, where you can have live TV in the vehicle, all the entertainment scenarios where you don't really need your mobile because that connectivity is built into your vehicle. All of this makes low-latency, high-reliability communication a very important factor for this industry to evolve.
Now, when we talk about autonomy in vehicles, we briefly covered the different use cases; what kinds of expectations do they place on the network? 3GPP and the other standards bodies are working to achieve the requirements these use cases impose. Just to give a feel for it, I have put together a few numbers.

Take vehicle platooning: vehicle one moves in front and vehicle two follows, tracking its path. Latency is critical here, because as soon as the lead vehicle brakes, the follower must brake immediately without hitting it. So they are looking at a very low latency of 10 milliseconds or less, a reliability of at least four nines (99.99%), and a very high data rate, because the vehicles will be exchanging huge amounts of data to ensure they do not collide under any circumstances.

Similarly, advanced driving scenarios require even lower latency. Remote driving, where somebody drives the vehicle from a remote location, demands much higher reliability, around five or six nines. And unlike other use cases where downlink data is key, in remote driving the uplink is another key requirement. When I say uplink data, I mean the data being sent by the vehicle towards the network is also very important. Typically, downlink is faster and uplink is slower.
But in this case, the uplink has to be high, because along with receiving instructions, the vehicle is also sending huge amounts of data towards the server so that the remote operator sitting on the remote side can get all the sensor data in real time and act on it. Certain other use cases, such as public safety, city infrastructure, and connected vehicles, impose similar kinds of requirements on the network.

Now, I'm sure a couple of you have seen an architecture for onboard compute in an autonomous vehicle: a high-performance compute platform with CPUs and RAM alongside GPUs and DSPs, since those have conventionally been faster than CPUs for this kind of processing. On top of that you have an operating system; in our lab we used Automotive Grade Linux to create our small proof of concept. Then you have an edge platform. I'll come back to what "edge platform" means here.

But before that, let me also touch upon the input sensors. Typically you have multiple input sensors, as we discussed, and a collection engine that collects the sensor data at a predefined frequency. That data is sent to the telematics engine and the analytics engine, and based on the decisions of the rule engine, the system assists the driver, or the autonomous car itself, in performing various actions. All of this runs inside the vehicle. But to take these decisions more intelligently and more precisely, it needs certain data from the outside world, and that is why the 4G/5G connectivity module has started getting a place in this whole architecture.
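The onboard loop just described, sensors sampled at a fixed frequency, fused, and passed to a rule engine that produces an action, can be sketched roughly as below. Everything here (the `SensorReading` fields, the half-speed safe-distance heuristic) is a made-up illustration of the flow, not the actual in-vehicle implementation.

```python
from dataclasses import dataclass

# Hypothetical fused sensor reading; a real vehicle has hundreds of signals.
@dataclass
class SensorReading:
    speed_kmh: float
    obstacle_distance_m: float

def collect(sensors):
    """Collection engine: sample every registered sensor once per tick."""
    return [read() for read in sensors]

def rule_engine(reading):
    """Toy rule engine: decide an action from one fused reading.

    Rough driving-school heuristic: safe following distance in metres
    is about half the speed in km/h. Brake if we are closer than that.
    """
    safe_distance_m = reading.speed_kmh / 2
    if reading.obstacle_distance_m < safe_distance_m:
        return "brake"
    return "maintain"

# One tick of the loop: sense -> fuse -> decide.
sensors = [lambda: SensorReading(speed_kmh=60.0, obstacle_distance_m=20.0)]
readings = collect(sensors)
action = rule_engine(readings[0])
```

At 60 km/h the heuristic safe distance is 30 m, so a 20 m obstacle triggers "brake". The real telematics and analytics engines sit between `collect` and `rule_engine` and are far richer than this single threshold.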
Now, coming back to what "device edge platform" means here. It means I should be able to write and deploy applications, not just a one-time fixed function. It gives you a flexible architecture where I can deploy multiple applications onboard and manage the data-plane movement, rather than a fixed implementation. That flexibility extends to lifecycle management of these applications, remotely or locally, depending on the compute available on the platform.

That is the overall architecture, but another key aspect I want to highlight here is the 5G connectivity itself. What does it do for you? It brings direct and indirect communication with the network: it lets the vehicle send its sensor data towards the network, and receive sensor data from nearby vehicles or from the network.

Now let us touch upon the intelligence building blocks; let me go back to the previous screen for a moment. On that screen we talked about the intelligence applications, the analytics engine, and the telematics engine. Let me expand on that a little. Typically, you sense some data, then you identify the object and its attributes from the sensors, and then you create a perception, a model. For example, if the vehicle is supposed to follow an object in front of it, it recognizes the object, starts tracking it, creates a model, and works out the parameters it needs: am I keeping the right distance from it? Am I getting all its parameters? Then, based on this perception, it takes a decision.
It defines its path, its location, its speed, and it can take control of all of that. As we saw on the previous slide, this is what the edge compute platform brings: surrounding awareness based on intelligent sensors, object classification, and onboard hardware compute.

At the same time, I see these as challenges. In the automotive industry especially, the more precision you want, the more high-end compute blocks you need on the vehicle. That means cost eventually goes up, power requirements go up, and maintenance goes up, because you have to handle software upgrades and security risks. So we have to strike a balance.

Similarly, with network connectivity and edge compute you do get network awareness, context-aware decision-making capability, and a highly reliable network. But, going back to my earlier point, security is one of the biggest challenges, and it poses a huge risk to achieving full autonomy.

Typically, with AI/ML, you apply certain business rules on your devices, and the devices send their training datasets to the cloud, which has the storage server and the algorithm running; the cloud optimizes the models and weights and sends them back. So the entities doing all the heavy lifting are the communication channel and the compute in the cloud. But with the reduced cost of compute, I can technically shift some of that heavy lifting onto the devices. For that, I think federated learning is one of the key concepts. It is not fundamentally different from AI/ML; it is just a different way of implementing the ML algorithms.
Federated learning is about training your algorithms on a decentralized architecture. I still have a centralized server that can do the heavy lifting, but when the devices have enough compute, I can offload part of the work to them rather than doing all the computation centrally and just sending the model back. The idea is: each device handles the computation for its own local data and sends only its newly optimized model to the cloud, which collates the updates and sends the result back to everyone. So everybody ends up with the same global model, but each device only computes on its local data.

Compared with the previous slide, where all the heavy lifting was done centrally, I have now offloaded some of the compute onto the devices, so I am not sending the entire training dataset to the server. That reduces the load on my communication channel and the overall bandwidth requirement. At the same time, each device does a share of the heavy lifting that is much smaller from its own perspective. In an overall sense, I am still running a similar kind of architecture, but I have offloaded part of it to the devices, without compromising on the model, the performance of the algorithms, or the optimization.

But federated learning comes with certain challenges. In the IoT world, in an era where all our devices are getting connected, the number of devices will be enormous; we are talking about millions and billions of devices. So even with this approach, there will be a huge amount of data on the network.
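The aggregation step described above is commonly implemented as federated averaging (FedAvg): each device trains locally and uploads only its weight vector and its sample count, and the server computes a weighted average. A minimal sketch in plain Python; the two "vehicles" and their numbers are made up for illustration.

```python
def fed_avg(client_updates):
    """FedAvg: weighted average of locally trained weight vectors.

    client_updates: list of (weights, n_samples) pairs, where weights is a
    flat list of floats trained on each device's private data. Only these
    updates cross the network; the raw training data never leaves the device.
    """
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    return [
        sum(w[i] * n for w, n in client_updates) / total
        for i in range(dim)
    ]

# Two hypothetical vehicles report their locally trained weights:
global_model = fed_avg([
    ([1.0, 2.0], 100),   # vehicle A, trained on 100 local samples
    ([3.0, 6.0], 300),   # vehicle B, trained on 300 local samples
])
# Vehicle B contributes 3x the samples, so the average leans towards it.
```

The server then broadcasts `global_model` back to all devices, and the next training round starts from this shared starting point.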
So we have to strike a balance in the kind of data, and the models, that we transmit from the devices; that has to be designed carefully. Even the bandwidth that 5G is going to provide might not be enough in the future.

Privacy is another of the biggest concerns. Federated learning helps because the data is stored locally, but at the same time you have to ensure that your model is secure: somebody fishing around in the network should not be able to fish the model out. Network security is of utmost importance.

Another important factor is network or system heterogeneity. As I said, we are going to have a huge number of devices, but those devices will be of different makes and for different purposes; they will not all be vehicles. I can have home automation, factory automation, or a street light, and on a street light you can't have huge compute. Some devices can support large compute, because they have that kind of power supply and that kind of space; other devices cannot. So because of system heterogeneity, I cannot assume that my federated learning algorithm will work under all conditions.

And another very important challenge is statistical heterogeneity. The devices generating this data will not produce it in the same format, and they will not share a similar architecture. So how do I take care of that if I have to create an algorithm and distribute my heavy lifting across different devices?
How do I ensure that all of them can send a similar kind of model, so that my server, my centralized unit, can collect them all, create a generic global model, and send it back?

Before I jump to that, let me quickly sketch the open-source frameworks available right now. This is just a very small list; there are several more being used by different organizations and developers. The ones I have personally used and touched upon are PyTorch, TensorFlow, and Keras. From what I have been reading recently, PyTorch is getting the favorite spot in the automotive industry: teams find it more useful for designing algorithms for automotive use cases. TensorFlow, meanwhile, has a huge set of libraries that can help you quickly develop your models and get your application or use case implemented.

Coming back to my earlier question: how do I convert these heterogeneous devices and heterogeneous models into a single model? I can have multiple implementations across different kinds of devices, but there is one more tool, again supported by the open-source community, called ONNX, the Open Neural Network Exchange, an open format for machine learning models. It helps you convert any model into a standardized format, so that whatever device I am targeting, whether I am sending something to the cloud or to the device, and whatever training framework I am using, I can work with the same model. I believe this is a good approach, though it is still evolving.
This is a good approach that will help us design future federated learning algorithms. What ONNX gives you is interoperability, without needing to know in advance whom you will be working with, and at the same time a large amount of hardware freedom: it supports CPUs, GPUs, and FPGAs. As a developer, I don't really have to worry about the target hardware; I can just convert my model into the supported format and send it to my device, or vice versa, take something coming from my device, convert it, and send it on to my training framework. That way I can design and implement my algorithms much faster and without much hassle.

OK, coming to how the overall system will look, and how we can leverage a federated-learning kind of architecture with 5G networks: I have tried to put the different building blocks together on one slide. There are too many blocks on a single screen to discuss in detail here, but let me quickly touch upon the different aspects of the network and how these things will work together in an overall system. As we briefly discussed, consider the kinds of devices that will be on the network. Say I have a vehicle network: these vehicles can talk to each other, and each has an edge platform with edge applications, which might be a telematics engine or analytics engines, and on top of that you have your application and your ML model. They will all use 5G connectivity. But before I get into that, let me first explain the architecture. In a typical 5G network, you have your telco platform.
These days everybody is talking about the cloud platform; conventionally it was a bare-metal platform. So now, underneath, you will see a cloud platform: Kubernetes, OpenStack, bare-metal-as-a-service architectures, plus multiple hardware accelerators. That creates the baseline hardware platform. On top of it you have the radio access network, where we have started seeing much more disaggregation, so that many different kinds of devices and many different vendors can connect. On top of that you have the edge platform; the O-RAN Alliance is one of the groups driving this initiative, ensuring you have a framework to deploy your applications with an accelerated data path. Then you have the typical management systems and OSS/BSS on top, and another key component coming into the picture: management and orchestration (MANO), which helps you deploy these applications, with the orchestration engine, the network function managers, and the infrastructure manager.

Together, this makes up the overall architecture of a 5G network, and the devices connect to it over the 5G radio links. Now, suppose I have a vehicle that has enough compute to run all the heavy-lifting algorithms on its own. It can do that and just share the model with the application running in my cloud. Alternatively, I might have a requirement for an application that I don't want to deploy somewhere in the public cloud but on-premises. Factory automation is the classic example: they need something on-prem. So I can deploy my cloud component within my premises, next to my radio, so that I reduce the overall latency.
So if I have a use case where latency is of utmost importance, I can deploy the application close to the edge. Now assume a scenario where I also have multiple other, simpler sensors. Rather than running the entire model themselves, they will just send their data, and the ML application running at the edge will take care of it. So the idea is: what level of edge should I be leveraging to run my algorithm? How do I break the whole AI/ML algorithm apart in such a way that I can use the compute of my local device, my edge device, or the far edge in the cloud?

Before I jump into the demo, let me also take a minute on the term "edge". I briefly spoke about it: edge has multiple meanings. It is not just the device edge or the network edge, but the question of how far I can break my network down so that I reduce the overall latency and process my data closer to the network, or closer to the device, without choking my network bandwidth.

To showcase what we have experimented with so far in my lab, I will show you a demo. The idea is: how do I get an emergency vehicle through a busy road without disrupting the traffic, leveraging this kind of federated-learning architecture? Let's assume there is an accident on the road. A request immediately goes to the hospital server asking for an ambulance. Technically, I am reaching out to an app server somewhere in the cloud; it talks to my master traffic controller, which runs some analytics, doing the heavy-lifting part there, and finds a route for the ambulance to go from point A to point B in minimal time. So at this point of time, I am running all of the algorithm in my cloud server itself.
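The "what level of edge" question above can be framed as picking the farthest (most shared, typically cheapest) compute tier whose round-trip time still fits the use case's latency budget. A toy sketch; the per-tier latency numbers are illustrative assumptions, not measurements.

```python
# Illustrative round-trip latency estimates per compute tier, in ms.
# Ordered from nearest (device) to farthest (central cloud).
TIERS = [
    ("device",        1),    # onboard compute
    ("on_prem_edge",  10),   # next to the radio, within the premises
    ("regional_edge", 30),   # operator edge data centre
    ("cloud",         100),  # central cloud
]

def place_workload(latency_budget_ms):
    """Pick the farthest tier whose round trip still meets the budget.

    Returns None if even onboard compute cannot meet the budget.
    """
    chosen = None
    for tier, rtt_ms in TIERS:
        if rtt_ms <= latency_budget_ms:
            chosen = tier   # keep the last (farthest) tier that still fits
    return chosen

# A platooning-style braking loop (~10 ms budget) cannot leave the near
# edge, while federated model aggregation (~500 ms budget) can run in
# the central cloud.
```

This is the design trade the talk keeps returning to: push latency-critical steps towards the device, push bandwidth- and compute-heavy aggregation towards the cloud.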
Then the controller intimates the hospital of the complete route from point A to point B, and the hospital notifies the ambulance. Now comes the important part. The ambulance knows it has to go from point A to point B. It is not only running certain intelligence of its own; at the same time, it is talking to the network to get the current condition of the road, so that if it has to re-route in real time, it can.

Meanwhile, the traffic controller reaches out to the app servers for all the identified cells along the entire route. For each cell on the way, it says: an ambulance is coming your way; based on your current traffic, please ensure the ambulance gets through without any hindrance and can take the fastest route. I run part of the analytics on my server, to avoid pushing everything onto the devices: I work out the current situation of the network, how many vehicles there are at each junction, and what it would take to move them around without disrupting their own routes. The cells then receive policies from the top.

Based on these policies, each cell asks the network to increase the bandwidth to handle all the messages that will suddenly surge, because there is an emergency situation. Suddenly all these devices start communicating: based on their sensors, they identify that a certain situation has happened, and they start reporting to the network server and to the infrastructure devices.
The cell then talks to all the infrastructure devices and guides them to run an algorithm locally to identify the traffic at each junction and route it accordingly. Once that happens, my ambulance gets from point A to point B without much hindrance. It looks easy on the screen, but frankly speaking, we have not been able to perfect this algorithm yet; we are still working on it. Having said that, this is an experimental algorithm, not something we are patenting; it is just to see how I can break the processing down into three different layers, at the cloud, at my edge, and at my devices, and what kind of processing makes sense at each level.

Before I go to Q&A, let me take a few minutes to quickly show you a video of the simulation we have done. On the screen you can see a typical city scenario, and I trigger an event, say I pick a point. As soon as I do that, all this processing happens in real time and my ambulance is on its way. Let me stop the video for a minute: if you have noticed, all the lights on the route have turned green, and the lights on the left side are red, to keep traffic off the path of the ambulance. At this particular moment, the ambulance is not only talking to the infrastructure. If you see all these lines, the vehicle lines: these are the vehicles talking to the network, exchanging a lot of data, and the ambulance is exchanging data too. All these lines are nothing but the communication happening between them. Here it reaches the pickup point; assume it has picked up the patient, and now it is on its way back to the hospital.
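The green-wave behaviour seen in the video, route lights green and cross-street lights red, can be sketched as a simple preemption rule applied per junction. The junction model here is entirely made up for illustration; the real simulation layers traffic analytics and re-routing on top of this basic idea.

```python
def preempt_route(signals, route):
    """Preempt traffic signals for an emergency vehicle.

    signals: dict mapping junction id -> current phase ("green"/"red")
    route:   ordered list of junction ids the ambulance will pass through
    Returns a new phase map: junctions on the route go green, all other
    junctions go red to hold cross traffic back.
    """
    on_route = set(route)
    return {
        junction: ("green" if junction in on_route else "red")
        for junction in signals
    }

# Four junctions in the city grid; the ambulance passes through two of them.
signals = {"J1": "red", "J2": "green", "J3": "red", "J4": "green"}
preempted = preempt_route(signals, route=["J1", "J3"])
```

In the demo, this kind of decision runs locally at each junction's infrastructure device, guided by the policies pushed down from the traffic controller, rather than in one central loop as shown here.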
For the entire route, from point A to point B and from point B back to point A, it has ensured that traffic is minimal or absent, so that the ambulance never has to slow down. There is a huge amount of processing going on behind this, across multiple layers.

Coming back to the slides: with this, I would like to take up the questions. Let me go back to the presentation and stop sharing. We are left with only five or six minutes, but let me try.

The first question is from Karthik. Karthik, unfortunately no, the algorithm we are working on is not going to be open source, because it is part of TCS. I am trying to work with my management to see if I can open-source it, but time will tell; I am not sure I will be able to.

On Tata vehicles: I am sure there will be autonomous-vehicle work, because every automotive company is working on autonomous vehicles, but whether they are using Automotive Grade Linux or not, I am not sure.

What are the service providers for 5G? They are limited at this point in time, as I mentioned earlier, because 5G is just catching up, and you will see many more coming up in the next couple of years. Vodafone, AT&T, and Verizon have started deploying 5G networks across the world. For that matter, in Japan, Rakuten has deployed a 5G network. We will see many more 5G networks in the coming future with support for V2X as well; they are implementing V2X in the network and will certainly support these kinds of use cases.

Any projection on when we can see 5G devices in the Indian market? If you are referring to mobile phones, I think Vivo has recently launched one in the Indian market.
I believe there is another one: OnePlus has launched one, and Samsung has launched 5G devices in the Indian market, so a couple are already there. But it will take some time for the Indian market to adopt 5G because, as you might have read, the spectrum itself has not been assigned yet. It will take a couple of years before we can say the Indian market is 5G-ready. Eventually it will happen, but it may take some time.

I think we still have three minutes left. OK, if there are no more questions, we can stop broadcasting. I will wait another two minutes before I stop. OK, no more questions; let's stop the broadcast. Thank you everyone for joining the session. Stay safe, stay home, and take care of your loved ones. Thank you so much. Bye.