This module is about DMS, driver monitoring systems, also known as in-cabin monitoring systems. My name is Jim Kath. I'm a senior marketing guy at ST in Santa Clara. DMS goes by a number of names: driver attention monitor, driver monitoring system, or, derisively, the nanny cam. These are systems within the car that watch what the driver is doing. Generally you have some kind of sensor that looks at the driver, the driver's face, the driver's head, and reports to whatever computer you have in the center stack or elsewhere in the car whether the driver is attentive or not. The idea is to remove any possibility that the driver is not looking at the road. The system can track where your irises are pointed and how your head is tilted, so it can tell if you're looking at a cell phone, at your child in the seat next to you, at your wife, out the window, or whether you're asleep or drunk. All of these are things we're trying to detect with a driver attention system. A typical driver attention system has some kind of light source to supply energy for the sensor, and the sensor then picks up what is going on with the driver. In the bottom right-hand corner of this picture, you can see the output of our proof-of-concept driver monitoring system. The software tells us where the subject's gaze is directed, where his head is directed, and where his irises are. That way, no matter what his head is doing, we can tell what his eyes are doing: your head can be turned to the left while your eyes are still looking straight ahead. This ensures there are very few false positives, and all of these parameters can be adjusted.
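To make the gaze-versus-head-pose distinction concrete, here is a toy sketch of that kind of decision logic in Python. The function, its inputs, and the 15-degree threshold are hypothetical illustrations, not ST's actual algorithm.

```python
# Toy attentiveness check: gaze direction, not head pose, decides.
# The function and the 15-degree threshold are hypothetical.

def is_attentive(head_yaw_deg: float, gaze_yaw_deg: float, eyes_open: bool) -> bool:
    """Return True if the driver appears to be watching the road.

    gaze_yaw_deg is the eye direction relative to the road, not the
    head, so a turned head with eyes still on the road passes.
    """
    if not eyes_open:            # asleep, or eyes closed too long
        return False
    return abs(gaze_yaw_deg) < 15.0

# Head turned left, but eyes still forward: no false positive.
print(is_attentive(head_yaw_deg=30.0, gaze_yaw_deg=4.0, eyes_open=True))
# Head forward, but gaze down on a phone: flagged.
print(is_attentive(head_yaw_deg=0.0, gaze_yaw_deg=40.0, eyes_open=True))
```

The point of the sketch is the same one made above: because gaze is measured separately from head pose, a turned head alone does not trigger a warning.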
This is essentially the output that would go to the system that manages the car, which might buzz your seat, flash your display, or otherwise tell you that you're not paying attention and the computer has noticed it. There are two types of systems in the car. On the left is driver monitoring: the sensor looks only at the driver. The region it covers is called the head box, so if you hear somebody say, "Can I focus in on the head box?", this is what they're talking about. It points directly from the dashboard to the driver. On the right, you see a cabin monitoring system. A cabin monitoring system puts a wide-angle lens on the sensor, and the idea is to be able to see everything in the cabin: what's going on with a passenger, what's going on with the driver, what's going on with the dog, whether something was left in the back seat, or whether your child is in the back seat. This system is generally not set up to run through the attention software we showed you on the previous page; it's only there to show you what's going on. It will not generally alert you that there's something wrong in the cabin; the driver monitoring system does that. Looking at the driver monitoring system, the sensor would generally sit in front of the driver only, with a narrow field of view that lets it focus on the driver, so the view might be about as wide as your shoulders by the time it reaches your face. The cabin monitor would generally sit higher in the center stack; it might be at the top of the dashboard or above the rear-view mirror. It detects all the occupants and what they're doing, and, depending on the software of course, it can classify those occupants and objects. So it could say, okay, he left his backpack in the back seat, or the dog is in the back seat. Oh, look, I forgot my baby.
I'd better get my baby before I close the door. All of these things can be done with this software. By the way, this is going to be pretty much standard in cars anytime after about 2024; it will be in all cars. The system can serve as DMS and CMS with one camera. Let's assume the center spot I'm pointing out here is the combined DMS/CMS camera. With our region-of-interest capability, we can tell this wide-angle sensor to look only at the driver and ignore the rest of the vehicle. You therefore get more pixels on the driver and better resolution on the driver; that's the little block diagram on the bottom here with the driver's head. The same sensor, when you widen the region of interest to the entire width of the dashboard, sees the entire cockpit of the car; that's the picture on the left. We use near-infrared imaging to do this, so the occupants of the car, the driver included, never see flashes of light or any other sign that this camera is looking at them. And we use a global shutter to do it. When I say global shutter, I want you to understand that a global shutter is a sensor that captures the entire image in one instant, versus a rolling shutter, which is typical in your cell phone and in most digital still cameras. A rolling shutter collects data in rows: it reads the first row, then the second row, then the third row. By the time you get past about the halfway point, you're well offset in time, so the data down here at line 998 is captured much later than the data in lines one, two, or three, or any of the lines before it. That's what produces this picture of the fan blades on the upper left, and it's why, in old photographs of moving things like old race cars, everything seems tilted.
It's because the front of the car arrived and was captured by the first line, then the second line, then the third line, but by the time the readout reached the middle of the frame, the car had moved well past where it started. Different parts of the image were therefore captured at what amounts to different times. A global shutter, on the other hand, which we recommend and which most people use for driver monitoring and cabin monitoring, captures the entire frame in one pop. We get everything at one time, so there is no temporal problem, that is, no time-skew problem, when we use a global shutter. This is very good for capturing objects that move quickly. It can capture even your blinking, which is a millisecond-scale event. It can capture your fingers moving, your eyes moving, your irises moving, your eyelids blinking. All of these things can be caught with a global shutter and cannot be caught cleanly with a rolling shutter, so a global shutter is the best solution for this application. Our sensor is also high dynamic range, which lets it gather a valid image when lighting conditions vary within the frame. Look on the left: this is our sensor at 68 dB. Notice that a lot of the frame is blown out. A lot of detail is gone because the sensor is overwhelmed by the amount of ambient light coming in through the window. The picture of this gentleman here is washed out: you could never tell what his head is doing. It looks like he's looking at you, but he might have his eyes closed; you can't know. And anything in the back window is invisible to you because it's all blown out. When we go to high dynamic range on the right, at 98 dB, we can now collect all the data we couldn't collect before. I can see through the rear window, and I can see what this gentleman here is doing: he's looking directly at you. He doesn't look very happy.
And this guy, you can see exactly what he's doing too. So this is one exposure on our sensor, 68 dB; with high dynamic range, the second exposure takes us to 98 dB. We add 30 dB when we go to high dynamic range, and this comes without any penalty in frame rate or anything else. Since both exposures are taken in the same frame, there's no temporal problem: no ghosting, no jitter, nothing, when we use high dynamic range. We also enable you to get full-resolution near-infrared images. This uses the same algorithm as high dynamic range, but reversed. In high dynamic range, we add two images taken in ambient light to get 98 dB. In the near-infrared solution, we capture near-infrared-on and near-infrared-off, again in the same frame, so there's no jitter or anything else. You can see here near-infrared on and near-infrared off. With the near-infrared illumination on, you can almost see through his sunglasses; with it off, you cannot, because there's no infrared energy. With subtraction turned on in our system, you can subtract one capture from the other and get only the infrared image, and that's the picture on the right. This is built into the sensor: as long as you can implement infrared illumination, you can do this with our sensor. There's nothing for you to add, nothing for you to buy; it's part of the sensor. This is called the subtraction algorithm, and it's all done in the same frame, so again there is no ghosting, no jitter, nothing, when you use it. It's a unique feature of our sensor, and it removes a lot of hardware requirements on the back end, after the output of the sensor, because you're already being sent a fully subtracted image. Our sensor also has high MTF. MTF, modulation transfer function, is the ability to resolve an image. In our case we have a very high MTF, and the MTF remains very good at one megapixel and across the full sensor width.
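The same-frame subtraction just described can be sketched numerically. The pixel values below are invented for illustration; on the real sensor this happens on-chip.

```python
# Hypothetical pixel rows from the two same-frame captures.
ambient_only    = [40, 120, 200, 90]    # NIR illuminator off: ambient light only
ambient_plus_ir = [140, 180, 215, 190]  # NIR illuminator on: ambient + IR return

# Subtracting leaves only the scene as lit by the IR illuminator.
# Ambient light, and anything visible-only such as the tint of
# sunglasses, drops out of the result.
ir_only = [max(on - off, 0) for on, off in zip(ambient_plus_ir, ambient_only)]
print(ir_only)  # [100, 60, 15, 100]
```

Because both captures come from the same frame, the two rows are aligned in time, which is why the difference image shows no ghosting.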
Here you can see that in the near-infrared case, we have a field of view of 45 degrees by 36, and we're picking up one megapixel with no problem at any distance. That is the green triangle here: the head box, where the sensor is pointed at the driver only. This sensor doesn't necessarily have to be directly in front of the head box; it can be the same sensor as the one used for cabin monitoring, but in this case it's shown directly in front of the driver. The blue triangle is RGB-infrared, which is what you want for anything happening in the cabin. These are all available within the same sensor; again, nothing to be added by you. And you can see that the one-megapixel DMS solution is actually equal to a three-megapixel OMS solution, and full-HD OMS is available from our system. We get enough pixels because the sensor can resolve the image, so we can use fewer pixels to collect what other people are collecting at four and five megapixels. We have a number of customers using our sensor, a 2.3-megapixel part in this case, in solutions that were originally specced at four or five megapixels, because the MTF is high enough that you don't need the bigger sensor. Again, MTF is the ability to resolve an image. If you want to understand MTF better, you can of course look it up on Google; everybody will. But essentially: with the naked eye, if you look at a tree 50 meters away, you can see individual leaves if you concentrate on them. A camera may not; you wouldn't see the edges of the leaves, just a green blur that your mind interprets as having a bunch of leaves in it. With high MTF, you can see individual leaves, and you can do that with our sensor. Here's an example of how our MTF compares to our competitors'. If you look on the left, this is a test made by one of our customers.
They're using a standard lens at 50% contrast, and you can see that our sensor, the 5761, this is the Robin, has much better MTF than our competitors: we're at 90 line pairs per millimeter at 50% MTF. Another way to look at this is on the right, pixel pitch versus MTF at the 940 nm wavelength, where again we are much better than our competitors. And again, these tests were not done by us; they were done by our customers. How does this work? In general, the optical data comes in through a dual-band-pass filter and then reaches the sensor itself. On the sensor, the color filter array is a four-by-four matrix. You can see the RGB pattern; we've made it a four-by-four matrix and placed four near-infrared imaging pixels in it. We use this array to provide the near-infrared imaging data, which is the upper arrow here, while the bottom path carries the ISP, or color, data. Since there are two paths through the sensor running in parallel at the same time, you can get a full-resolution near-infrared image and a quarter-resolution color frame at the same time. So at 60 frames per second, you can collect near-infrared and color data simultaneously: no lag, no ghosting, no jitter. This is the sensor as it ships; you don't have to add anything. All of this is available as standard product off the shelf. To give you a better idea: I've talked previously about collecting data in single frames, and here's what we're talking about. This is a 900-microsecond exposure window: 800 microseconds of integration time, then 100 microseconds for the second hit in the color array. The long exposure collects most of the data, and the second hit adds another 30 dB; that's how we get to 98 dB. So there are actually two hits within the same frame, enabling us to collect all the data when we need it.
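The decibel figures used throughout this module (68 dB for a single exposure, 98 dB with the second hit) can be checked against the standard conversion between dynamic range in dB and a linear brightest-to-darkest signal ratio. The formula is standard; the framing here is just a quick sanity check.

```python
# Dynamic range in dB <-> linear signal ratio: dB = 20 * log10(ratio).

def db_to_ratio(db: float) -> float:
    """Convert a dynamic range in dB to a linear brightest/darkest ratio."""
    return 10 ** (db / 20.0)

# The 30 dB added by the second exposure widens the usable
# brightness ratio by roughly 32x:
gain = db_to_ratio(98.0) / db_to_ratio(68.0)
print(round(gain, 1))  # 31.6
```

That roughly 32x wider ratio is what recovers the window and rear-seat detail in the HDR comparison shown earlier.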
Additionally, shown here is the analog-to-digital converter, which takes the collected charge and turns it into the digital data the system needs for an image. The total length of the frame here is 8,000 microseconds. This is the block diagram of the sensor. Here we see the dual path through the sensor: double ADCs, both the same width, both with the same defect correction built in. From there we go to the merge-or-subtraction block. This is the main algorithm that provides either the HDR data or the near-infrared imaging data; it's either an addition or a subtraction, the same algorithm run in reverse. There's no trade-off, because of the dual pipe. Some of our competitors offer similar technologies, but they do these operations sequentially, which means you're going to get a ghost or some jitter when you go from one mode to the other; you might actually see what looks like a ghost behind the image. You may have seen this elsewhere. Everybody has been using merge algorithms, but most people run them sequentially; we run them concurrently. Here is the same solution, but now using RGB and near-infrared. You can see that the two captures have to have the same time length for each set of data, so we have 600 microseconds for near-infrared and 600 microseconds for RGB. Now I can subtract one from the other evenly and have good data in both. Again, these are taken concurrently, so again, no ghosting, no jitter. We have storage built into every pixel, so the global shutter can store two different values, which enables image subtraction and, of course, the high-dynamic-range functionality as well. How do we get clean images with little noise? That's due to our deep trench isolation, which is unique to ST sensors. These are simulated images of photons entering an array of cells that use DTI.
Here you can see the case of a chief ray angle of zero degrees, that is, light going straight down into the pixel. Each one of these squares represents a pixel: green, red, blue. Because there's a deep trench between each pixel, there's no way for electronic or photonic noise to spill over from one pixel to the other. This means we have much less noise than our competitors. This technology enables that both for regular image sensors and, in our case, for the global shutter. Here you can see the CRA case: the deep trench isolates the signal, with the pixel down here and the light entering straight in. Now, going to 30 degrees: most people want a 30- to 33-degree chief ray angle. Our standard is around 30, and we can make custom CRAs if you desire. Notice that even at 30 degrees, there is no photonic or electrical noise spilling over between pixels. What this means is much less noise, so our ADCs are digitizing real data rather than noise, and we're more likely to have correct, noise-free data coming out of the sensor. This page shows the general specifications for the Robin sensor, RGB-IR: part number 1762, 2.3 megapixels. It is a small sensor, low noise, and it enables both subtraction and high dynamic range. The pixel size is 3.2 microns. Effective QE is 8% at 940 nm and 21% at 850 nm, and the MTF at half Nyquist is 71% to 79%. Because we have perfect pixel encapsulation, we have this high MTF. There's 2K of OTP on the package, and it's available either in die form or iBGA. ST is proud to be working so deeply with our customers on global shutter, and we have numerous solutions. We've just talked about near-infrared imaging with a flood illuminator; we also have color imaging with a flood illuminator. Again, we're using the same pixels in both cases.
We're in the middle of developing a structured-light solution, which would use the same sensor. We still have to finalize what we're going to use for illumination; it would probably be some kind of dot projector or structured-light projector, which lets us mathematically figure out where everything is in the image and then build up a 3D image. This essentially enables us to use the sensor for object identification, person identification, those sorts of things. We have this working in our labs. It's not ready for prime time, but we intend to continue with it, and we'll have something we can demonstrate probably by CES this year, which is going to be a virtual CES. If you have any questions, don't hesitate to contact us. We'll be happy to help you out. Thank you.
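The "mathematically figure out where everything is" step in a structured-light system is triangulation between the projector and the camera. Here is a minimal sketch of the standard pinhole triangulation relation; the baseline, focal length, and disparity values are assumptions for illustration, not figures from ST's design.

```python
# Minimal depth-from-triangulation sketch for a projector/camera pair.
# depth = baseline * focal_length / disparity is the standard pinhole
# relation; all numeric values below are assumed, not ST design figures.

def depth_m(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Depth of a projected dot, from how far its image shifts (disparity)."""
    return baseline_m * focal_px / disparity_px

# Assume an 8 cm projector-to-camera baseline and a 1000 px focal length.
# A dot shifted 100 px is 0.8 m away; one shifted only 40 px is 2.0 m away.
print(depth_m(0.08, 1000.0, 100.0))  # 0.8
print(depth_m(0.08, 1000.0, 40.0))   # 2.0
```

Doing this for every projected dot yields the 3D point cloud that object and person identification would run on.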