My name is Federico Lisa and my friend here is Gianfranco De Jong. Today we are going to try to do a presentation together, just to change the rhythm. Let's try it out. He's the youngest one in the company and I'm one of the oldest, so let's see how it goes.

First, a little bit about Algolux, in case you haven't heard of us. We have won tons of different awards in different fields, especially in autonomous vision; one of our products is what you can see up there. Of course I'm showing this image because we're on the right. On the left is a state-of-the-art computer vision stack of the day for detection and recognition in autonomous vision; we're on the right. What happened? We specialize in the difficult cases: snow, rain, ice, fog, that kind of thing. Most of the people doing computer vision like the system on the left work in California. In California, I don't know, I would like to work in Canada... they don't have those problems. So we specialize in these conditions, and we are outperforming these guys by a long shot. That's why we have all these awards.

We are here in Montreal, but we have offices in San Francisco and in Munich. Right now we're 30 employees; we doubled in size last year and we are doubling again this year. And we are very active in academia and industry: we have been presenting papers in all kinds of forums and conferences around the world. Last year we did our Series A round for 10 million, and I don't know if you know the first guys there, GM, they make cars. They gave us a big chunk of those 10 millions, so we are backed by the big guys in the industry. So whenever you feel like you want to participate in the next generation of computer vision for autonomous driving, come see us. Carina, our human resources person, is right there, and a couple of our developers are here and here.

So today we're going to talk about this little thing. The owner of this module is GF Amesa. All right. So what is EMVA? EMVA stands for European Machine Vision Association, and EMVA 1288 is basically a standard for the measurement and presentation of specifications for cameras used in machine vision. The talk today is more about the camera simulator within the EMVA 1288 module: how and why you would want to use it.

Am I speaking loud enough? That one doesn't work? No, it works, it's just for the room; the other one is for the recording. Okay, is it better? Okay. So, Federico, can you go to the next slide, please? Yeah, we're going to do a lot of that. Sorry, this one is mine. Go ahead.

When you have a camera, there is a little bit of physics behind the simulator. What we're trying to replicate in the simulation of a camera simulator, which is actually a sensor simulator more than a camera, is the process between the photons arriving at the pixels and the digital value that you get in your image, let's say your pixel value. The process goes more or less like this: the photons arrive at the pixel and create electrons; those electrons are converted into a voltage; that voltage is amplified; then it is stored, and we get our digital number. That's the whole process. Easy peasy, right? It just took us thousands of years to figure it out.
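(As a minimal sketch of that photon-to-electron-to-voltage-to-number chain, here is a toy single-pixel model in Python. The parameter values and the function name are illustrative assumptions, not the EMVA 1288 package itself.)

```python
# Toy model of the chain described above: photons -> electrons -> voltage -> digital number.
# All numbers (QE, gains, noise, offset) are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

def photons_to_dn(mean_photons, qe=0.6, gain_uv_per_e=5.0, adc_gain_dn_per_uv=0.05,
                  dark_noise_e=10.0, offset_dn=32, bit_depth=12):
    """Simulate one pixel of a sensor, very roughly."""
    photons = rng.poisson(mean_photons)               # photon shot noise
    electrons = rng.binomial(photons, qe)              # quantum efficiency: photons -> electrons
    electrons = electrons + rng.normal(0.0, dark_noise_e)  # dark / read noise in electrons
    voltage_uv = electrons * gain_uv_per_e              # charge-to-voltage conversion
    dn = offset_dn + voltage_uv * adc_gain_dn_per_uv    # amplification + offset before quantization
    return int(np.clip(round(dn), 0, 2**bit_depth - 1)) # clip to the ADC range

print(photons_to_dn(5000))
```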
Now, how the framework works to simulate that: basically there's an image that is going to go through a pipeline like this, if you can call it a pipeline. There are two major steps you need if you want to do that. The first one is to convert an RGB image, say you want to work in color, into a spectrum. What I mean by spectrum is basically what your color physically is: a bunch of photons at different wavelengths that arrive on your sensor. That's what I'm going to call a spectrum. The camera simulator from EMVA will calculate everything related to the number of photons that represents, to the conversion into electrons it is going to make, to the noise that goes with that. Finally, you're going to get a grayscale value, because all cameras, including color cameras, output just a single value per pixel, which then needs to be processed into maybe a color image or something. That's basically the base workflow we're going to go through.

The first step is just converting your RGB image to what we call a spectrum. A lot of people have worked on that, and what you need to know is that it's not trivial and there's not one right answer. Basically, you need to go from three RGB values to a continuous spectrum of, well, a potentially infinite number of values. Of course, we're going to limit ourselves to about 100 values per pixel, but nevertheless you lose a lot of information. For our needs, we just have to assume a bunch of things, and we're going to use a method that is as simple as possible while still letting us capture the spectrum with the simulated camera afterwards.

So, Federico, can you show the notebook? Very quickly, because this is technical and I know we're the last presentation, so I don't want to put you to sleep. If you want to use the camera simulator for color purposes, you do as follows. After you load your image and do your imports, you need to convert the image. In our case, this is our image; if you're wondering what it represents, it's something we work with at Algolux to adjust things. We're going to talk about that later. So we're going to convert that to the spectrum we were talking about. Basically, we're going to suppose that the R, G and B values are actually normal distributions centered around specific wavelengths that represent the red, green and blue values, and then we're going to multiply each value of each pixel by that new array we just created, and this is going to give us our spectrum.

So Federico is going to run the notebook, and you can see these are the three normal distributions that I basically chose yesterday. There's nothing very scientific about this; it's just my choice, and you can do whatever you want. That's not really the point of this discussion. If you want to know what the spectrum really represents, let's say we look at one pixel of the image we had and look at its spectrum; this is the result when we run the cell. You can see that each R, G and B value just multiplied and added the different normal distributions I just presented. It's pretty simple, and that's all it does. So that's it for the conversion to spectrum. Okay, just to make clear: in real life, you get it only the other way around. What you get from real life is a spectrum that is normally continuous, not the little bins we're seeing here.
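(Here is a minimal sketch of that RGB-to-spectrum idea in Python: each channel is spread as a normal distribution centered on a chosen wavelength, and the three are summed per pixel. The center wavelengths, widths and helper names are my own illustrative choices, not the values or API used in the notebook.)

```python
# Spread each RGB channel over ~100 wavelength samples using three Gaussians, then sum.
import numpy as np

wavelengths = np.linspace(400, 700, 100)             # nm, about 100 values per pixel

def gaussian(center, sigma=25.0):
    g = np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2)
    return g / g.sum()                                # normalize each basis curve

basis = np.stack([gaussian(620),                      # "red" wavelength (illustrative)
                  gaussian(540),                      # "green"
                  gaussian(465)])                     # "blue", shape (3, 100)

def rgb_image_to_spectrum(img_rgb):
    """img_rgb: (H, W, 3) floats in [0, 1] -> (H, W, 100) per-pixel spectra."""
    return img_rgb @ basis                            # weighted sum of the three Gaussians

# One mostly-red pixel and the wavelength where its spectrum peaks:
pixel = np.array([[[1.0, 0.1, 0.05]]])
spectrum = rgb_image_to_spectrum(pixel)[0, 0]
print(spectrum.shape, wavelengths[spectrum.argmax()])
```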
But this is a really good approximation. We could do even better if we modified our RGB distributions and made them more complex, but something as simple as this works.

Sensors normally have a little layer on top, specifically color sensors. That little layer on top of the sensor is called the Bayer pattern: a little color filter on top of each pixel. As you can see here in the little blue, green and red pattern, each pixel has either a blue filter, a red filter, or a green filter. What does that mean? It means that pixel is only going to see greens, reds, or blues. And that's how we affect the quantum efficiency of the pixel. The quantum efficiency of the pixel is the percentage of photons that are going to be transformed into electrons, and it depends on the wavelength of the photon.

In the framework there's actually an object to do that, the QE. What it basically does is set the quantum efficiency for each and every pixel of the camera. In our case, there's a function that generates that for you, and it's very straightforward to use. You can see it here: you just plug in the wavelengths you want your filters to have, and it tiles a pattern across your entire sensor. These are, for example, three of the quantum efficiencies that we gave to the sensor: the blue, the green and the red. And you can see, I think, right away what it's going to do when you capture the spectrum from the RGB image.

The reason we want to do this is that we're going to be able to change a lot more things about the image than if we just took a random image that already exists and did some processing or added filters to it. This way we can change the way the image is literally captured, so we have more information about the image, or we can change more things about it, so we have better information for any post-processing we want to do afterwards.

So, for example, if we just run the grab method of the camera, you're going to see that we get an image that is pretty ugly, like this. Just be careful, because this is the Bayer image, if you want. You can see here that it's basically the pattern that we tiled to make the Bayer filter on our sensor. In this area you can see that each pixel that had the red quantum efficiency is brighter than all the others, because we're in a red spot. This is a zoom of the first 10 pixels in the top left corner of the image, and that part is a little bit redder, so the pixels with the red filter let through more photons than the greens and the blues. Then what you can do is use OpenCV to debayer your image, and you get back what you had in the beginning.

Okay, and that's the sensor; the simulation of the sensor is over at that point. What we show from here on is just extra stuff that you can use to play with it. This debayering is part of the ISP; some ISPs are in hardware, some are in software. The ISP is a little chip, a little device in your cell phone between the sensor and the display, that actually gives you colors, removes noise, adds sharpening, makes you thinner, younger, you name it. And actually it's true, I mean they tune the ISP to make you look younger. Sadly.
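(Here is a minimal, self-contained sketch of the Bayer/quantum-efficiency idea and the OpenCV debayer step described above. The QE curves, the RGGB tiling and the grab function are illustrative stand-ins, not the actual EMVA 1288 framework objects.)

```python
# Tile per-pixel quantum efficiencies in a Bayer pattern, "grab" a raw image, then debayer with OpenCV.
import numpy as np
import cv2

H, W = 8, 8
wavelengths = np.linspace(400, 700, 100)

def qe_curve(center, peak=0.6, sigma=40.0):
    """Illustrative Gaussian QE curve for one color filter."""
    return peak * np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2)

qe = {"R": qe_curve(620), "G": qe_curve(540), "B": qe_curve(465)}

# Tile an RGGB pattern over the sensor: each pixel gets the QE curve of its filter.
pattern = np.empty((H, W), dtype="U1")
pattern[0::2, 0::2] = "R"; pattern[0::2, 1::2] = "G"
pattern[1::2, 0::2] = "G"; pattern[1::2, 1::2] = "B"

def grab(spectra):
    """spectra: (H, W, 100) photon spectra -> raw Bayer image (noise omitted for brevity)."""
    raw = np.zeros((H, W))
    for color, curve in qe.items():
        mask = pattern == color
        raw[mask] = (spectra[mask] * curve).sum(axis=-1)
    return raw

raw = grab(np.random.rand(H, W, 100))
raw8 = cv2.normalize(raw, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
rgb = cv2.cvtColor(raw8, cv2.COLOR_BayerRG2BGR)       # demosaic the raw Bayer image with OpenCV
print(rgb.shape)
```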
So, what we do here is just a little bit of debayering on the image, and we can reconstruct the same image we had before. The colors are not exactly the same, because the Bayer pattern we put on is not the same, and we now have noise, because the sensor simulates all the noise corresponding to the electronic and physical interactions of the photons and the electronics inside the sensor. If we want to play a little bit, for example, if we build another camera but just shift the filters a little bit to the right, we get an image that is a little bit red, and that's completely normal. Those Bayer filters are physical properties of the sensor, so you really have to specify those numbers as you have them in the real sensor you are trying to simulate. Just to give you an idea of how long it takes: it's 244 milliseconds for a small image, about 350 by 650, something like that, but it's a new laptop, so it's pretty fast for what it is.

How do we use this? Because it's really nice that we have a sensor we can simulate, we have the camera and we can play with it, but what do we do with it? One of our products, actually the product that we work on, is called Atlas, and it's an ISP optimizer. The ISP has hundreds of parameters, and we have to find the right values of those parameters so that the image is nice, it's good looking according to some KPIs. We use things like genetic algorithms and swarm-type optimizers to look for the right spot on the ISP parameter map for the best image quality. In real life we do this with real cameras, and we also do it with the simulator. We have a display, we put some images on it, then we have the lens, which normally creates some distortion, because that's what lenses do, and then we have the camera with the sensor and the ISP, where we can change all the settings and all the parameters of all of that.
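(Here is a minimal sketch of the idea behind that tuning loop: search an ISP parameter space for the setting that maximizes an image-quality KPI. The toy ISP, the KPI and the plain random search below are illustrative stand-ins, not the Atlas product, which works on real ISP parameters with more sophisticated optimizers.)

```python
# Toy ISP-tuning loop: random search over two parameters against a reference-based KPI.
import numpy as np

rng = np.random.default_rng(42)

def simulated_isp(raw, gamma, denoise_strength):
    """Toy ISP: gamma correction plus a crude 'denoise' that blends toward the mean."""
    img = np.clip(raw, 0.0, 1.0) ** gamma
    return (1 - denoise_strength) * img + denoise_strength * img.mean()

def kpi(img, reference):
    """Toy KPI: negative mean squared error against a reference (higher is better)."""
    return -float(np.mean((img - reference) ** 2))

reference = rng.random((64, 64))
raw = np.clip(reference + rng.normal(0, 0.1, reference.shape), 0, 1)  # noisy simulated capture

best_params, best_score = None, -np.inf
for _ in range(200):                                   # walk the parameter map, keep the best spot
    params = {"gamma": rng.uniform(0.3, 3.0), "denoise_strength": rng.uniform(0.0, 0.5)}
    score = kpi(simulated_isp(raw, **params), reference)
    if score > best_score:
        best_params, best_score = params, score

print(best_params, best_score)
```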
Just to give you an idea, here is one of our runs for one simulated ISP. Here we have one KPI, let's say some gamma parameter and a loss. You can see the image at the beginning, which means the ISP is really not well set, but you can also see the KPI. Here, toward the end of the run, the image is a lot more decent than the previous one. You might ask why the image is so noisy, can't we make better images than that? Yes we can, but the idea here was to use a high gain in the sensor so we can simulate the conditions we have in Canada, because we don't have enough sunlight. So this is actually to tune the ISP to be the best it can possibly be under those conditions. So of course, as always, we're high ISP because we're really hot and we have a hot source connection; we didn't know if it was a picture of the team of the three people or the source they were looking at. OK, thank you, everybody.

[Audience question] Yes, we're actually using it to test our software without having to go outside. If we only use images, we can't really change ISP parameters, because, well, there are none to change; but with this we can, so we can test our entire software and develop faster. Then, when we get real cameras, well, we have other problems, but the development of the software is already done, so we don't have that problem. Actually, who here works with hardware? One, two, OK, we have three. You work with me, so it doesn't count. When you have hardware underneath your software, it gives you weird problems that you cannot always reproduce. It's really different from working with pure software, so if you can figure out most of your software problems purely in software, even better. That's why this camera simulator helps us a lot, and it's actually not bad: it reproduces pretty well the real physics of sensors. Thank you for listening. Thank you very much.