Hi, good to know that we're live. I'm Dimitris Anastasiou from Irida Labs, where I lead the product growth team. We build embedded vision AI software, and we have developed a platform, PerCV.ai, that enables the development and deployment of vision AI at scale. I'd like to show you more here.

To develop a good vision AI solution at the edge, you need a good understanding of all of the steps involved. The first step is the Vision Twin, where you model in 3D exactly what goes on in a scene, so you can select the right camera module, lens, and optics. The example here is the traffic flow monitoring sensor we created for the smart city context together with Renesas; more on that in a little while. The next step is to annotate your data and choose the right classes. Through our platform on the cloud, you can directly manage all of your datasets. Once that is done and you know what you're doing, the next step is to start creating models. You create a new model by choosing the name, the description, and the task type, you select the right datasets to train the model on, and then you choose the deployment platform. In this case, we have selected the Renesas RZ/V2L MPU. As soon as a model has been trained, you can see how well it performs right on the cloud, and you can select a different model or deploy a different model to the edge device. That way you can easily tell whether something is going well or not. And of course, the final step is to get metadata directly from the sensors. You have sensors deployed in the field, but you don't need to see the images; you just need the metadata.
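As a rough illustration of what such image-free metadata might look like, here is a minimal sketch of a cars-per-minute record. The field names and schema are assumptions for illustration only, not the actual PerCV.ai format:

```python
import json
from datetime import datetime, timezone

def make_count_record(sensor_id: str, lane: str, cars_per_minute: int) -> str:
    """Build a hypothetical cars-per-minute metadata record.

    The schema below is illustrative; a deployed sensor would use
    whatever payload format its cloud platform expects.
    """
    record = {
        "sensor_id": sensor_id,              # assumed field name
        "lane": lane,                        # assumed field name
        "cars_per_minute": cars_per_minute,  # the metric described in the demo
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

# Example: one record for a monitored lane
payload = make_count_record("sensor-01", "lane-2", 17)
```

A payload like this is small enough to be sent frequently over a constrained uplink, which is the point of shipping metadata instead of imagery.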
In this example, we have a traffic flow monitoring application, so we're counting cars per minute. I'd like to show you a little more of how that works. Here's the real-time live demo of exactly what goes on. We developed this solution, this vision AI sensor, together with Renesas. It has a TinyML engine, and in real time it counts the number of vehicles that are crossing a very specific lane. This is the sensor-level dashboard, which is why here we also see the imagery together with the metadata; and of course, as shown previously, the metadata is sent directly to the cloud. This solution is designed to run at very low power consumption, it's meant to be used in the smart city context, and it runs our TinyML engine and is PerCV.ai-compliant. It interfaces over MQTT, so it can be integrated directly into a third-party platform, or it can forward the metadata straight into another software provider's infrastructure. That's pretty much the summary.

Nice. And what's there on the wall? What are you talking about here?

We're showing different use cases that we have deployed over the years. Right now it's at the part that discusses the platform itself, but the idea is that there are a lot of different markets where vision AI is relevant. The example we showed previously had to do with smart cities, but at the same time we're market agnostic. We have developed a lot of solutions for industrial players: in the industrial context, for warehouse monitoring, inventory management, and truck monitoring; and in retail, for understanding what goes on in terms of product recognition, shelf interaction inside specific stores, and consumer analytics. All of these are enabled through our PerCV.ai platform, and this is the thing that glues everything together, from conception all the way to deployment down to the edge. Cool.
All right. Are you a new company? You've been working a long time on this stuff.

We have been working for quite some time: more than 10 years in the embedded vision field now, so we have very good expertise in what's going on in embedded. This is why we're able to deliver solutions through our platform in a very timely manner. We have been working in AI for more than five or six years now, and the latest offering is, of course, our platform, which also enables other developers, on their own or together with us through engineering services, to build very good products.

And what's the collaboration with Renesas?

We're working tightly together to create the vision AI sensor: bringing this new product to market and, through the business cases we're working on together, scaling this specific solution into the thousands.

Some of these newest Arm processors, even the Cortex-M microcontrollers, are getting extra performance to do this kind of thing.

Actually, yes, some of that can be offloaded there; it can be done in many applications. Sometimes you need an accelerator like the one in the RZ/V2L, but some applications can indeed be offloaded to Arm Cortex architectures. It depends on the use case, and if you have the right expertise, and we're happy that we do, you can go very, very low and do very nice things with vision.

Where are you based?

In Greece. We're 30 people strong, and we've been growing quite fast over the past few years.

Where in Greece?

The headquarters are west of the capital, Athens, in a city called Patras, right next to the university. But we also have offices in Athens.

So some students come directly from the university?

Yeah, we have had quite a bit of young talent join us. We're 30 strong now, so 30 people, most of them engineers.
But yeah, we also have senior people who have been working in this field for quite some time.

A lot of people talk about AI, machine learning, vision, computer vision. And you are at the forefront of this?

We consider ourselves a leader. We have been at it for quite some years, and we care about bringing good-quality products to the market. We have deployed mass market in China, and we have very good collaborations on products that plan to scale into the many thousands. You know, there is this buzzword, but there is a huge gap, what we call the CV tech gap. It's very easy to develop an open-source solution using some of the models that are already available. But productizing, actually creating a product that solves something, is an entirely different discussion. You need to understand how to deploy in production, and for that you need an end-to-end understanding like the one presented previously. You cannot just assume that you have the best images and that those images automatically reach you. In most cases you have to deal with diverse scenarios and different environments: being able to adapt, having the infrastructure to get the right data, and at the same time meeting the bill of materials and the performance requirements. So there's a lot of discussion to be had there, because the buzzwords are nice, but we actually have to get products to the market.