What I'm talking about here is why we need artificial intelligence to start with. The core problem is the scarcity of talent and human resources. Take a typical case, the case of India: over 100,000 schools have just one teacher, and the patient-to-doctor ratio is very poor. Human talent is scarce, and AI can help that expertise reach the next six billion people who don't have access to it today. Across sectors, AI can help scale scarce human resources.

Now, in a typical AI development life cycle, we do realize that it's important to use artificial intelligence well, wherever required, to give access to many more people. That life cycle has four stages: data acquisition, writing your architecture file, training to get your weights and biases, and then deployment. It's the deployment phase I'm talking about here. And one thing we need to realize is that the training environment, where you build your model with your GPUs, is not the same as your deployment environment; your end users will not have the same GPUs. So it's very important to treat the two differently, and the deployment phase is what I'm going to talk about today.

Think of a scenario where the human body works as a sensor: you are falling, and that sensor information has to reach your brain as the central system, which then sends a response back. If we keep working the way we do today, the cloud way, there's a lot of network latency in that round trip.

So what is edge AI? For AI work, we started with GPUs, and then we realized that GPUs are expensive. There are a lot of deployment applications where we don't actually need a GPU, but when we add one to the end product, it's very difficult to position in the market, because that one component is extremely expensive. So people moved to cloud solutions, because they're pay-as-you-use and subscription-based. But the cloud comes with its own set of problems and limitations: privacy and security, because data goes out of your network, plus network latency and bandwidth issues.

So why edge AI? Quick response, because everything is local on the node, so latency is low. Better security and privacy, because everything stays in your network. Lower bandwidth requirements, because you have less data to send; you don't send everything, only the important results go to the cloud or the central node. That also means lower communication cost, reduced dependency on the network, and reduced power consumption.

And why should you care? The share of devices with edge AI functionality is projected to grow from 6% in 2017 to 43% by 2023.
I'll be telling you about two things that can help in deploying your applications on the network edge. The first is the Neural Compute Stick, a tiny, very low form factor device. With it, you can train your model anywhere; you're not restricted in your training environment. Use a discrete GPU or a cloud instance, and use whatever framework you want. In an organization, different people are comfortable with different frameworks: someone with PyTorch, someone else with Caffe, maybe someone with TensorFlow.

The second thing, which I'll be showing on the next slide, is OpenVINO. You give it your model in any of these frameworks, and at the end you get the desired intermediate representation (IR): an XML file and a bin file. The XML holds your model architecture, whatever model you want, and the bin holds your weights and biases as a binary blob. Once you have the XML and the binary blob, the model becomes framework independent then and there. From there, I can deploy it to any platform I want, be it a CPU, a GPU, a Myriad (which is what this device is, by the way), or FPGAs. So it becomes both framework and platform independent for you and your developers, in a complete manner.

This device, the Neural Compute Stick, is just $99. You offload your model onto it, and all the processing happens there. So even with an Atom, a Celeron, or a Raspberry Pi for that matter, you can run inference on your deep learning models. Just imagine: something like a Celeron processor or a Raspberry Pi can now infer your deep learning model, and you don't need to go through the limitations that you have with the cloud.

To end this: there is no one right platform. Be intelligent and smart about choosing the right platform for each component of a product. You don't need to just have a GPU and continue with it, or a cloud and continue with it. Think about which one is required at each phase, and make use of it.

This is the picture for OpenVINO. It's an open-source tool, free to use. You have your trained model in any of these frameworks; the Model Optimizer converts it to IR, so you get the XML and bin files. Again, a lot of layers that are there in training are not actually used in inference. We automatically take care of that as a black box: those layers are removed and the graph is optimized. So you now have an optimized graph which is, to repeat, both framework and platform independent. And then, using the Inference Engine API, you can put it on any of the platforms. Thank you so much. This is Mashen from Intel. Thank you.
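To make the Model Optimizer step concrete, here is a minimal sketch of the conversion to IR. It assumes OpenVINO's pre-2022 toolkit layout, where the Model Optimizer is the mo.py script, and a hypothetical frozen TensorFlow model called frozen_model.pb; the FP16 data type matches what the Myriad VPU in the Neural Compute Stick executes.

    import subprocess

    # Invoke OpenVINO's Model Optimizer (mo.py ships with the toolkit;
    # the model name and output directory here are hypothetical).
    subprocess.run(
        [
            "python", "mo.py",
            "--input_model", "frozen_model.pb",  # trained model from a supported framework
            "--data_type", "FP16",               # Myriad VPUs run FP16
            "--output_dir", "ir/",               # yields frozen_model.xml + frozen_model.bin
        ],
        check=True,
    )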
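And once the IR exists, loading it onto the stick takes only a few lines with the Inference Engine Python API. This is a minimal sketch, assuming the pre-2022 openvino.inference_engine module and the hypothetical IR files produced above; changing device_name to "CPU" or "GPU" runs the exact same IR unchanged, which is the platform-independence point from the talk.

    import numpy as np
    from openvino.inference_engine import IECore  # OpenVINO <= 2021 Python API

    ie = IECore()
    net = ie.read_network(model="ir/frozen_model.xml", weights="ir/frozen_model.bin")

    # "MYRIAD" targets the Neural Compute Stick; "CPU" or "GPU" would
    # accept the same IR with no changes to the model files.
    exec_net = ie.load_network(network=net, device_name="MYRIAD")

    input_name = next(iter(net.input_info))
    input_shape = net.input_info[input_name].input_data.shape  # e.g. [1, 3, 224, 224]

    # A dummy tensor just to show the call; real code would feed a preprocessed image.
    frame = np.zeros(input_shape, dtype=np.float32)
    result = exec_net.infer(inputs={input_name: frame})

    output_name = next(iter(net.outputs))
    print(output_name, result[output_name].shape)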