Okay, so let me introduce myself first as the guest here. My name is Bojan Razovac, and I'm responsible for the research and development department at NTT Data Romania. If you go to the next slide: my company is a partner with Red Hat, and we belong to NTT Group. NTT Group is one of the largest IT service providers in Japan; it's a global corporation, and the subsidiary of the group I come from is NTT Data. At NTT Data we are focused mainly on service delivery, and some of our key focus areas are automotive, which we are going to cover today as well, AI, blockchain, cybersecurity, data and intelligence, and other emerging technologies that can generate a big technical footprint in today's market. So Ramki, back to you.

So we've been partnering with NTT Data as a system integrator over the past 18 months, specifically in the newer areas which Red Hat has come into, which would be the edge, with OpenShift and Kubernetes being rapidly adopted into these spaces. We found a perfect partner in NTT Data, where there are lots of industrial edge cases and automotive use cases which can be hosted on the Red Hat OpenShift platform, it being a true hybrid solution; taking your workloads in a multi-cloud and also a hybrid-cloud way is the way to go forward. And over the past 18 months, we've been working to onboard various products which were traditionally written for the embedded side onto the IT side, as most of the previous speakers were talking about how to integrate developers at a car or automotive OEM onto the Red Hat platform. So we used various accelerators in order to adopt these cloud practices and solutions onto OpenShift.
So one of the ways in which we started integrating is through validated patterns. Red Hat has built expertise over the past decade on Linux containers, DevOps practices, and how to write enterprise solutions on an enterprise-grade Kubernetes distribution like OpenShift Container Platform. Then there were many aspects of various industry verticals which needed to be addressed in an opinionated way, in terms of what would be the right way of getting these solutions on top of OpenShift Container Platform. So, working with the various engineering teams and field solution architects, they came out with validated patterns in which you can use the open-source software that is built into these validated patterns as code. In the words of Linus Torvalds: talk is cheap, show me the code. Many developers wanted to look at how to integrate their edge solutions on OpenShift, and in general on the Kubernetes platform, and many of them are eager to get their workloads, predominantly in automotive and industrial use cases, to use modern distributed software like Kubernetes in their production environments. These validated patterns, and the accelerators which power them, can be one of the first steps by which both partner developers and customer developers can start moving their code from staging to production environments pretty quickly. Onboarding developers is easy with these validated architectures, and they are very helpful for onboarding projects and getting your projects to products. The accelerator which we used for this particular product was Red Hat Bobbycar. This is a vehicle accelerator.
It's a vehicle simulator in which there are various Red Hat products, not just Linux containers but also various Red Hat runtimes and Kubernetes operators, which are used to get your solutions to staging and production. So it's beyond just the vehicle workload; it's also how you are going to take it to production across multiple clouds. Your customers could be using AWS, Google Cloud, or Azure, or they could have their own private data centers. These accelerators make it possible to deploy these applications and process these workloads at any of those endpoints. Another thing is that we believe in doing things in the open: all the source code for this is available on GitHub, and most of it is reproducible to handle most of the automotive and industrial use cases. We're also part of various collaborations; ELISA is one of the foundations where we actively participate. And we collaborate with many customers and partners; in this case, NTT Data is contributing both their automotive expertise and their automotive workloads on OpenShift Container Platform. So this is one of the ways in which we used Bobbycar to accelerate an NTT in-car solution, a human driver perception platform, and one of the gateways it uses, called the multi-sensory analytics platform gateway, onto OpenShift. All the code you're seeing here is workable, and all of this source code for the integration platforms is available on GitHub; we'll share the links later. Bobbycar itself is an accelerator, and we use it because once you're adopting an enterprise Kubernetes distribution like OpenShift, you're trying to integrate various Red Hat components, not just the operating system: you have messaging queues, you have time-series databases, you have the authentication mechanism, and you need to build your AI/ML pipelines on top of your existing cloud and platform solutions.
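As a toy illustration of the kind of telemetry such a vehicle simulator feeds into those messaging queues and time-series databases, here is a minimal sketch; the signal names, value ranges, and topic naming are illustrative assumptions, not Bobbycar's actual schema:

```python
import json
import random
from datetime import datetime, timezone

def telemetry_record(vin: str) -> str:
    """Build one JSON-encoded telemetry message for a simulated vehicle.

    The signal names and ranges here are made up for illustration;
    a real simulator would replay or model CAN-derived signals.
    """
    record = {
        "vin": vin,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "speed_kmh": round(random.uniform(0, 130), 1),
        "engine_rpm": random.randint(800, 4500),
        "brake_pressure_bar": round(random.uniform(0, 80), 1),
    }
    return json.dumps(record)

if __name__ == "__main__":
    # A producer would publish each record to a broker topic such as
    # "vehicle/<vin>/telemetry" (hypothetical name); here we just print.
    for _ in range(3):
        print(telemetry_record("WVWZZZ1JZXW000001"))
```

In a real deployment these messages would flow through the cluster's message broker into a time-series store, which is exactly the plumbing the accelerator wires up for you.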
So what Bobbycar did is provide an opinionated architecture for how these in-car solutions can be onboarded, and for which software components you would need from the vast Cloud Native Computing Foundation landscape, because there is quite a lot of software to choose from. What Bobbycar does is accelerate the process: you focus on your own automotive code, and Red Hat takes the workload and distributes it across multiple clouds. We also use various operators from Open Data Hub and various Kubernetes operators in order to facilitate this onboarding process. So I will stop here and go to the next slide. Bojan, over to you.

Thank you, Ramki. During the following slides, I will try to explain this concept of human driving perception: why we are doing it, what exactly we are doing, and why we think it is important. If we take a look at the connected vehicles market trends which Gartner published last year, we can see that a lot of topics around the metaverse and human-centric design have been raised nowadays. The autonomous vehicles market is stalling a little bit, something many other analysts have also reported. According to that, we need to find a new way to approach clients, and one of the most important parts is definitely to understand the clients' needs, and to understand them in real time, while the driver and passengers are inside the vehicle. If you go to the next slide: not only the trends on the market, but also the regulations we have seen issued by the European Union, and also by the U.S. Senate, are all requesting mandatory safety features which need to be implemented and supported by vehicles.
So if you take a look at the list of such features, you will see different solutions which support the driver in preventing accidents, or at least in recognizing early signs of possible accidents, and which try to adjust the behavior of the vehicle so that it correlates with the humans' perception; this is something we definitely want to have. On our next slide we want to show exactly the concept. So, Ramki, if you can just move to the next slide. This is the concept of the human-centered approach that we are proposing here. What do we want to do? We basically want to get more feedback from the client. We know the famous ASIL formula: exposure, controllability, and severity. These are the three main factors we calculate today to determine the ASIL level of certain modules, which are developed according to functional safety regulations. But what is missing here is the human sentiment and perception of certain features. You can imagine a situation where emergency braking does its job, so the vehicle brakes on time. But how smooth will it be? Will it stress the people inside the vehicle? How will it cope with their emotions and level of stress? This is something that, unfortunately, at this moment we do not have many possibilities to understand. The same goes for lane assistance warnings and similar features of the vehicle: we can measure the behavior of the vehicle, but we cannot measure the human feedback on it. And this is very important, because at the end of a test drive you can ask people to fill in a questionnaire, but in most cases it will not represent the exact moment in time when a certain manoeuvre happened and when a corrective action needed to be taken. If you go on to the next slide, this is exactly the solution we are proposing.
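To make the exposure/controllability/severity idea concrete: ISO 26262 determines the ASIL level from severity (S1-S3), exposure (E1-E4), and controllability (C1-C3) classes via a lookup table, which is often summarized by an additive shorthand. Here is a minimal sketch of that shorthand, offered only as an illustration and not as a substitute for the standard's actual table:

```python
def asil(severity: int, exposure: int, controllability: int) -> str:
    """Approximate the ISO 26262 ASIL determination table.

    severity: 1-3 (S1-S3), exposure: 1-4 (E1-E4),
    controllability: 1-3 (C1-C3).
    Uses the common additive shorthand: sum the class indices and
    shift down by 6; anything at or below zero falls back to QM
    (quality management, i.e. no ASIL applies).
    """
    if not (1 <= severity <= 3 and 1 <= exposure <= 4
            and 1 <= controllability <= 3):
        raise ValueError("class index out of range")
    score = severity + exposure + controllability - 6
    levels = {1: "ASIL A", 2: "ASIL B", 3: "ASIL C", 4: "ASIL D"}
    return levels.get(score, "QM")

# Worst case (S3, E4, C3) yields ASIL D; mild cases fall back to QM.
```

The point of the talk is that this classification captures the vehicle's behavior but says nothing about how the braking or steering intervention *felt* to the people inside, which is the gap the human driver perception platform targets.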
We take several different parameters together and apply them to a cohort of vehicles. We want to understand the signals coming from the CAN bus. We want to understand human behavior, and use facial recognition to understand the emotional level. We want to classify the objects in front of the vehicle. We want to know the exact context, in terms of weather and of the road, through real-time navigation, so we understand exactly what happened and why a situation that produced a certain reaction from the driver or the other passengers came about. And this is exactly what the next slide shows in a short sequence: how we are collecting this data and how we are trying to understand the vehicle's collision risk during, for instance, this specific manoeuvre. I must mention here, and I would like to thank, the company AVL, which is also one of our partners and which supported us in collecting this data during the large data-collection experiment we did in the previous period. If we put this all together, in the context of OpenShift and Red Hat solutions, we can apply it to a large cohort of vehicles, and this is exactly what we tried to achieve with this concept. Ramki, the next slide, please. The whole solution is probabilistic: our machine learning model has been trained to understand different signals and their change over time.
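One classic ingredient of a collision-risk estimate like the one described here is time-to-collision (TTC): the distance to the object ahead divided by the closing speed. The talk does not say which risk metric the platform uses, so treat this as a generic sketch; the 2-second threshold is an illustrative assumption:

```python
def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed.

    Returns infinity when the gap is not closing (closing speed <= 0).
    """
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps

def risky(distance_m: float, closing_speed_mps: float,
          threshold_s: float = 2.0) -> bool:
    # 2 s is an illustrative threshold; a real system would tune it per
    # context (weather, road type) using exactly the contextual signals
    # mentioned above.
    return time_to_collision(distance_m, closing_speed_mps) < threshold_s
```

For example, a 15 m gap closing at 10 m/s gives a TTC of 1.5 s, which this sketch would flag as risky, while the same closing speed at 100 m would not.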
So from one of these signals alone, taken outside the others, you cannot understand the exact situation, and you cannot predict a certain manoeuvre or event. But if you put them all together, and you train the model by monitoring, in our case, around 14 signals at the same time, and their distribution through time, we can recognize the patterns indicating that in the next 500 milliseconds to one second a certain situation, a certain manoeuvre, will happen during the drive. That gives us the possibility to check what the driver is doing and, if necessary, to take additional corrective action through the monitoring of the vehicle, but also to understand how people in general behave in this situation. This is very important because, beyond corrective actions correlated with human perception, we can also benchmark certain features, and this is what we are trying to achieve. As for our future cases, if you go to the next slide, please: this is the goal, and this is something we are achieving together with our colleagues from Red Hat. We are trying to apply it to a large number of vehicles, so not only to monitor one vehicle, but to apply a joint understanding through federated learning, which will help us to improve the general model. And of course it will keep the safety and security of the information exchanged between a vehicle, that is, the device in the vehicle collecting information in real time, and the global knowledge, which is stored in the cloud. This is something that we believe could help prevent hazards and accidents in the future, because we will try to put the human in the center of events, in the center of the drive.
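The idea of watching the joint behavior of many signals over a short window can be sketched as a sliding-window feature extractor feeding a classifier. Everything below is an illustrative assumption, including the window length, the signal names, and the trivial rule standing in for the trained model:

```python
from collections import deque
from statistics import mean, pstdev

WINDOW = 10  # samples; at e.g. 20 Hz this is a 500 ms look-back window

class ManoeuvrePredictor:
    """Keep a rolling window per signal and extract simple joint features."""

    def __init__(self, signal_names):
        self.windows = {name: deque(maxlen=WINDOW) for name in signal_names}

    def push(self, sample: dict) -> dict:
        """Add one time-aligned sample of all signals, return features."""
        for name, value in sample.items():
            self.windows[name].append(value)
        # Mean and population std-dev per signal over the current window.
        return {
            name: (mean(w), pstdev(w)) if len(w) > 1 else (w[-1], 0.0)
            for name, w in self.windows.items()
        }

    def predict(self, features: dict) -> bool:
        # Stand-in for the trained model: flag a likely manoeuvre when
        # steering activity and braking both spike inside the window.
        steer_mean, steer_std = features["steering_angle"]
        brake_mean, _ = features["brake_pressure"]
        return steer_std > 5.0 and brake_mean > 10.0
```

The real platform would monitor all 14 signals and replace `predict` with the trained probabilistic model; the sketch only shows the windowing shape that makes a 500 ms to 1 s look-ahead possible.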
And that way, we will try to make a much better interaction between the machine, which in this case is of course the vehicle, and the human who is driving it or being driven in an autonomous vehicle. So that would be all. Ramki, back to you.

Yeah. If you're a Red Hat customer or a partner or an OEM, we are happy to demo the human driver perception platform for you as well. It is hosted in the GitHub repository of the Bobbycar project, so if you already have an OpenShift cluster, you can just run this project and try it for yourself. Bojan, thank you so much for joining us today.

And Ramakrishna, this is great. I am almost afraid to buy a new car until all of this stuff gets ready, because it's moving so fast. I just don't know what I should do next.