Hi, welcome to embedded world 2022. This is Bharat from e-con Systems; I'm a marketing manager. And let me introduce our colleague, Abdul. Hi, this is Abdul from e-con Systems. I look after new business for the Europe and UK region. And let me introduce one of my colleagues, Taruna. Hello, my name is Taruna, and I'm handling key and strategic accounts here at e-con for the Europe region. Nice. So far, the show has been pretty good, and we have a lot of interesting demos. My colleague, who has been working through all the demos, will walk you through them.

Hello there again. Our first demo here is a camera based on the IMX485 sensor from the Sony STARVIS range. This is a 10-bit raw Bayer sensor, and it is currently running on the AGX Xavier platform. But within a week's time, we will actually be launching this camera on the Orin platform, the world's most powerful embedded platform right now.

Next, here is the STURDeCAM20. This is a GMSL2 camera, and you can connect it over a cable of up to 15 meters. So if your requirements call for more than 3 meters, up to 15 meters, between the camera and the processor, this is the camera solution for you. Can we check the descriptions you have here? This is running on the Orin kit. Yes, the GMSL2 camera is running on the Orin kit. All right.

Our next demo here is actually targeted at the medical industry, specifically pre-analytical IVD, in vitro diagnostics. This camera is based on the AR0521 sensor, which is a 5-megapixel sensor. What it can currently do is identify the color of a test tube and tell you what color it is. This is probably the smallest AI-powered camera, because it has an AI chip inside. It is still a work in progress: it will also be able to identify the tray and say whether a test tube is present and, if it is, whether it is empty or full. I'll hand you over to Abdul, my colleague here, who takes care of new business development, and he'll walk you through the other demos. Thank you. Hi.
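To make the color-identification demo above concrete for readers: one common approach is to average the pixels over the test-tube region and bucket the resulting hue. The sketch below is a minimal illustration under that assumption; the function name, thresholds, and color labels are all illustrative, not e-con Systems' actual pipeline.

```python
import colorsys

# Hedged sketch: bucket an averaged 8-bit RGB sample into a coarse color
# label by hue. All thresholds here are illustrative assumptions.
def classify_color(r, g, b):
    """Map an averaged RGB value (0-255 per channel) to a color name."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if v < 0.15:          # very dark pixels regardless of hue
        return "black"
    if s < 0.20:          # desaturated pixels have no meaningful hue
        return "white/grey"
    deg = h * 360.0       # hue as degrees on the color wheel
    if deg < 20 or deg >= 330:
        return "red"
    if deg < 70:
        return "yellow"
    if deg < 170:
        return "green"
    if deg < 260:
        return "blue"
    return "purple"
```

For example, a sample averaging to (200, 30, 30) would be labeled "red". A real IVD pipeline would also need segmentation of the tube cap and robustness to lighting, which this sketch omits.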
So the next camera in line is our time-of-flight camera, or ToF camera, as we like to call it. It has two cameras, basically: one for depth sensing and one for RGB. The RGB camera is 2 megapixels, and the depth camera is a VGA camera. And these are the light sources that send out the pulses used to calculate depth. You can see here, this is the display, and this is the interface that we've developed. The camera works in two different modes. One is the near mode, which works from 0.2 meters to 1.2 meters. You can see here, as you go closer, the 3D point cloud that is created, and based on the distance, you get different colors. When I change to the far mode, it works from 1 meter to 6 meters. Now we can see more information because the range of the camera has increased.

So there's a big demand for time-of-flight? Yes. And for all kinds of applications? Yes, yes. It is basically used for depth sensing: wherever you want to create a 3D image, or get the RGB information as well as the depth information for each of your pixels. So that is our solution for depth sensing. What are the coolest ideas you've seen for this kind of solution? Is it robots driving around, or where do you put it? Yes, it is primarily focused on autonomous mobile robots, the AMRs, which require a lot of depth sensing. That is the target application for this particular product.

Nice. What are we seeing here behind you? Yes, behind me you have this 180-degree stitching camera. I mean, it's a three-camera setup, basically. Yes, here: you have three cameras angled inward to cover a complete 180-degree view. These three cameras are based on the 13-megapixel AR1335 sensor, and this is connected to the NVIDIA AGX Xavier here through the MIPI interface.
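The time-of-flight principle described above can be sketched in a few lines: the illumination sources emit light pulses, depth is recovered from each pulse's round trip, and the display clamps and colors depths against the active mode's working range. The mode ranges (near: 0.2-1.2 m, far: 1-6 m) come from the demo; everything else below is an illustrative assumption, not the camera's actual firmware.

```python
# Hedged sketch of the time-of-flight principle from the demo above.
C = 299_792_458.0  # speed of light, m/s

# Working ranges per mode, in meters, as stated in the demo.
MODES = {
    "near": (0.2, 1.2),
    "far": (1.0, 6.0),
}

def depth_from_round_trip(t_seconds):
    """The pulse travels to the object and back, so depth is c * t / 2."""
    return C * t_seconds / 2.0

def colorize(depth_m, mode="near"):
    """Map a depth to a normalized 0..1 value for point-cloud coloring,
    clamped to the active mode's working range (as the demo display does)."""
    lo, hi = MODES[mode]
    d = min(max(depth_m, lo), hi)
    return (d - lo) / (hi - lo)
```

For instance, a 10-nanosecond round trip corresponds to roughly 1.5 meters of depth, which falls inside the far mode's range. Real ToF sensors typically measure phase shift of modulated light rather than timing single pulses directly, but the distance arithmetic is the same.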
And this is the 180-degree stitched view from the cameras. So it's stitching using a three-camera setup; there's some smartness in there. Yes. What is the technology you had to develop to stitch? It's basically three cameras taking three different images. There will be some overlapping regions between those images. We cut out those overlapping regions and create a combined 180-degree image from the three-camera setup. Nice. So it has to be synchronized: all those images have to be taken at the same time, and all of that. Yes.

There are a lot of cameras here. Yes, this is a range of our cameras. Not the complete range, but some of the cameras we have. This is the micro camera. We have a wide range of cameras, from 1 megapixel to 16 megapixels, and some of them are on display here. You can see the NileCAM solutions; there are two of them here, based on the GMSL interface. Then you have the AR1335-based autofocus camera here. Do you have it open? It is locked right now, so it stays connected. This is the AR1335-based autofocus camera, which is a USB 3.0 camera. You have various other cameras: there's the CU20 family here, which is a very good NIR camera, again a USB 3.0 solution. You also have cameras with an enclosure; this is a global shutter solution, the See3CAM_24CUG. So that's about our range of cameras.

A lot of cameras. These are not all of our cameras, just some of them. Yes, we supply cameras all over the world. And all these logos? Yeah, we have a very strong partner ecosystem. For any of your ISP or sensor needs, we are partners with Sony, OnSemi, and Omnivision. If you have any ARM, GPU, or CPU needs, we have a very, very strong partner ecosystem, as you can see behind us. What is the important thing about partnering with the SoC vendor, that you get to optimize everything? Of course, of course. We are NVIDIA's allied partners.
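The overlap-cropping step described above can be sketched simply: each of the three synchronized cameras sees one strip of the scene, adjacent strips share an overlap region, and the stitcher keeps only the non-overlapping part of each subsequent frame before concatenating. This is a minimal illustration assuming fixed, pre-calibrated overlap widths; a production pipeline like the one in the demo would also warp the frames and blend the seams.

```python
import numpy as np

# Hedged sketch: concatenate synchronized frames left-to-right, dropping
# the columns of each subsequent frame that the previous frame already
# covers. Overlap widths are assumed known from calibration.
def stitch_horizontal(frames, overlaps):
    """frames: list of H x W[x C] arrays; overlaps[i]: columns shared
    between frame i and frame i+1."""
    assert len(frames) == len(overlaps) + 1
    panorama = frames[0]
    for frame, overlap in zip(frames[1:], overlaps):
        # Keep only the part of this frame not seen by the previous one.
        panorama = np.hstack([panorama, frame[:, overlap:]])
    return panorama
```

For example, three 640-column frames with 100-column overlaps would stitch into a 640 + 540 + 540 = 1720-column panorama. The synchronization point made in the interview matters here: if the three exposures are not simultaneous, moving objects appear duplicated or torn at the seams.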
So being NVIDIA's allied partners, we have access to their internal ISP. So for any sensor that you would want to work with, even if it is not in our roadmap or portfolio, if that is what you're looking for, we would be able to fine-tune the sensor and design, develop, and manufacture the camera modules just for you. We also offer custom solutions, yeah.

And when you say you work with Sony, they provide so many camera sensors, do you just take them and use them? Yes, we have an R&D team that does the design. See, there are about three sectors that we actually target: industrial, retail, and medical. Our R&D teams and engineers work along with the partners and then decide which sensor is going to go into which of our applications and business cases. Whatever is better suited, that's what they'll select and work with. And there are also board makers, PCB designers? Yes, of course, of course. We have more than 350 engineers working across R&D support, custom camera development, product development as such, services, and custom solutions. That is how our engineers are placed.

And where? We are from the south of India, a city called Chennai; we are headquartered there. We have contract manufacturers in Delhi, and we have our sales offices in the US. We also have our European representative here: our European distributor is Silicon Highway. And the company history, many years? Yeah, we've been in this business for more than 18 years now. We started off as a design house, and slowly we realized that there is a huge gap between customers who need smaller volumes of cameras and the sensor providers or other component providers who demand huge volumes. So we decided to act as a bridge between the two.
So if a sensor provider demands a 10,000-unit volume as an MOQ, we take on that 10,000-unit MOQ, design, develop, and manufacture the cameras, and then supply them to customers who need just 500 or 1,000 units. So we act as a bridge for the small-volume customers. And beyond small volumes, we also cater to about 300K volumes per year. So the volume's not really a barrier, then.

And computer vision on the edge is a huge topic right now. Of course. And you're doing a lot of work there. Yes, we have a spin-off from our company; it was actually a separate division. Do you want to join us? So, yeah, we have a spin-off called VIST AI Labs. They are the ones who work on the AI and ML side, so if there are any AI or ML needs, we are perfectly capable of taking care of them as well.

Oh, awesome. How is embedded world, how is it to see real people again? Oh, it's lovely. It's fantastic being here after almost three years, I believe. Because of the pandemic, we were not able to be here for the last couple of shows. So after almost two to three years, meeting the customers live and in person brings a lot of energy and enthusiasm. What was it like to stay connected with people during the pandemic? You just did a lot of video chat? Yes, yes. It was always virtual: video chats all the time, Teams calls or Zoom calls. That's how we've been managing to stay in touch and meet each other.

Cool. And now there's a whole bunch of new projects: everybody coming and saying, I want to do this, I want to do that, something like that. And you can supply the projects with the components? Oh, of course, the whole world is facing a supply chain issue with regard to semiconductors; the pandemic has not been very kind to our industry. So there is a global supply chain issue happening. But like I said, we have a very, very strong partner ecosystem and a very resilient supply chain.
So we make sure we get forecasts from our customers so we can cater to their needs, and we understand the timelines they're working with so we can help them meet their time to market. Cool. Thanks a lot. Thank you so much. Thanks for showing us everything. Thanks. Cool. Thank you so much for stopping by. Hope to see you next time at embedded world. Cool. Thank you. Thank you.