Welcome everybody. My name is Laurent Pinchart, I'm the CEO of Ideas on Board and the main architect of the libcamera project, and that's what we're going to talk about today. I'll try, in just 40 minutes, to show you how easy and quick it is to add support for your platform in libcamera.

So let's imagine the following situation. Okay. First, let's imagine that this would work. There you go. You have a brand new SoC. You're working for an SoC vendor, you're creating a brand new platform. This is going to be amazing. You're about to release it to the public, and that SoC has the best camera hardware ever, so you want to make sure that all your customers can benefit from it. You're about to unveil it to the world, and you're really excited about it. You've prepared a development board that's going to be cheap and that everybody will want to buy. And because we're all engineers, and extremely good at marketing and at coming up with names, you've decided to call it the Broccoli Pi. It's going to be a hit with the kids; they love broccoli.

You've done your homework. You've watched videos on YouTube of previous talks on how to enable support on the camera side, for camera sensor drivers and for your ISP driver. So you've done all that; this is not what we're going to talk about today. You have of course decided to use the Media Controller API and the V4L2 API, you're going upstream with the drivers, and you have not been shy about asking for support from the linux-media community, because these things can be quite hard. But let's assume that all of that is done.

Actually, your hardware may be a bit more complex than that. So let's add just a tiny bit of complexity, and you may have something that looks like this. And you may be a bit concerned.
That's going to be a bit hard to swallow for your users. Fortunately, you have also seen presentations about libcamera, and you know it's the thing over there that seems to be exactly what you need, and you're absolutely right. So let's see how libcamera can help you.

First question: who has never heard of libcamera before? Okay, one hand, at least two, three. Okay, there's a few. Great. I'm very happy, because in the front row here everybody has a libcamera t-shirt, so I assume they may not learn that much today. But hopefully there will be other people in the room who will come out with additional information.

So, what is libcamera? If you're familiar with the graphics world, I'd describe it as the Mesa of the camera world. It's a complete userspace camera stack. As I mentioned, you've done your homework on the kernel side; libcamera does not touch the kernel, it assumes that you have kernel drivers. But it provides a complete stack in userspace. It handles things like enumeration of cameras: it will automatically figure out what's in your system and expose that to your applications. For each camera that it finds in the system, it supports capturing multiple streams at the same time, in different resolutions, in different formats, with different properties. And it also supports controlling all the parameters of the camera for every single frame that you capture. So compared to using a USB webcam through the V4L2 API directly, you have lower-level, much more powerful control of all your camera parameters. We're talking potentially about hundreds of different tunable parameters that you could set.

(I think this thing probably needs a new battery at some point.) So this is what libcamera looks like. At the very bottom (yes, the laser pointer works as well) you have the hardware; you've done that. Remember the first slides: you've got amazing hardware, and you have excellent kernel drivers on top of that.
So libcamera is everything that sits on top. Looking at what we have in there, the first thing I should mention, for those who are not familiar with libcamera, is the entry point: an object called the camera manager. I'll be relatively brief on that, because the goal today is to see how to add support for your platform in libcamera, but it's important to understand the architecture. The camera manager is what your applications will go to to get all the cameras in the system: to enumerate the cameras, see what's there, get hold of cameras and use them.

And we have another component in libcamera. Well, the camera manager first: I mentioned enumeration of all the devices in the system, and, a new word we're introducing here, creation of what we call pipeline handlers. The pipeline handler in libcamera is the platform-specific component that handles all the plumbing. It handles communication with the kernel, with all the drivers, and not necessarily just the kernel directly: you could have a camera pipeline that uses a GPU for processing, for instance. So your pipeline handler is really a piece of plumbing that takes all the hardware pieces, all the processing pieces, and puts them together in a platform-specific way.

So your camera manager has enumerated cameras; it instantiates pipeline handlers, which register one or more cameras. You may have a single camera or multiple cameras supported in your system. And we're getting to the part that I mentioned is platform-specific. On the right-hand side, in green, you have the pipeline handler; we'll dive a bit into that, because I'm going to show you how you can implement one for your platform. On the left-hand side is the counterpart to the pipeline handler. That's also platform-specific, and that's all the image processing algorithms. When I'm talking about image processing algorithms, we call that IPA in libcamera. Nothing to do with the beer, although
I'm sure a few people will exit this room wishing it had been about that instead. These are not algorithms that process the image directly; the ISP hardware processes the images. They are all the algorithms that compute the hundreds, sometimes thousands, of parameters that need to be set at the hardware level for every single frame that is processed. If you're a bit familiar with control theory and things like PIDs, for instance, you can imagine this as a PID on steroids: instead of one input and one output parameter, you have thousands of inputs and thousands of outputs, and it works in real time. That's really the control loop of your system. So that's really the hardware-specific part.

As I mentioned, the pipeline handler interfaces with the kernel. It figures out all the Video4Linux and media devices it needs to use, and abstracts all of that towards your applications. libcamera does not expose anything V4L2-specific to applications; an application using libcamera does not even see the V4L2 pixel formats. It's completely hidden behind the scenes.

And there we go. As I mentioned already, your image processing algorithms, your IPA module, will typically consume statistics computed by the hardware, because with high resolutions and high frame rates, computing statistics on the CPU is expensive, so ISPs assist with that. It consumes the statistics and calculates the parameters to be applied to the next frame. To put it in a really simplified way: the statistics show the image is too dark, so I'm going to push the exposure time up. There's way more to it than that.

When it comes to the image processing algorithms, the IPA modules are, in libcamera, separate modules: they are loaded at runtime, they're pluggable, and the idea is that they are completely isolated from the rest. They only communicate with your pipeline handler.
They never access the hardware directly. That's an architecture decision: the pipeline handler is part of the libcamera core and has to be fully open source, while the IPA modules don't. They can be out of tree, they can be closed source, because vendors often don't want to release all the knowledge they have put into creating those algorithms. We have implementations of those in libcamera that are fully open source, and that's what we work on, but we also want to offer vendors the ability to provide closed-source implementations. Not that we like that, but it's exactly the same situation as with GPUs: a vendor who wants to upstream a DRM driver for a GPU in the kernel (losing my microphone... there we go) has to provide an open source implementation of OpenGL or Vulkan in Mesa, but is free to also provide a closed-source implementation in parallel. That's the kind of architecture we have here. So IPA modules, if they are closed source, are sandboxed; they can only communicate with the pipeline handler, and there's a whole IPC system to handle that.

Now that you're very familiar with libcamera (that took five minutes), we're going to dive into the bulk of the talk today: how do you add support for your new platform? Very briefly: if you want support for a new platform, I have bad news for you. You're going to have to write code. Hopefully there are a few people in this room who don't mind that.

So how does it look, where do you find the code, what does the source tree look like? Navigating libcamera: at the top level you have just a few directories. I'm not going to enumerate all of them; this is more for reference, so you can check the slides offline. But we do have an include directory, with lots of different things there, and a source directory that contains the different components. I think that part is a bit more interesting. We have the core of libcamera in the libcamera directory. We have Python bindings.
We have what we call adaptation layers, which adapt libcamera towards existing frameworks. So we have a GStreamer element implementation, for instance. We have a V4L2 compatibility layer that emulates V4L2 for native V4L2 applications that you may not be able to change or recompile; they may be closed source and you still want to use libcamera with them, those kinds of things. We have an Android camera HAL implementation. So, for those who don't know: if libcamera supports a platform, not only will you be able to run it on native Linux systems, with applications that use libcamera directly or plug-and-play applications like a GStreamer pipeline, it will also work on any Android system. You get that for free, basically.

Okay, quickly going back to where we were. In this libcamera directory is the core of libcamera; that's what we're going to look at now. We have a bunch of pipeline handlers for the different platforms supported today. There's actually one more; this is a bit of an old slide. And we're expanding coverage. I was really hoping I could announce today, on stage, support for at least one brand new platform that everybody would be excited about, but that's not the case. So see you at the next conference, where hopefully it's going to be possible. No spoilers. And the same thing in the IPA directory with the IPA modules: we have support for different platforms there. So that's what it's going to look like.

Let's start simple, and fairly quickly. This is the block diagram of a camera pipeline on an NXP i.MX8M Plus, for a piece of hardware called the ISI. So it's not an ISP.
It's fairly simple. You have multiple inputs from different camera sensors, you have a crossbar switch at the front, and you have a very short processing pipeline. A bit easier to read: you have your image sensors, outside of the chip obviously, a CSI-2 receiver, and the processing pipeline just has a scaler and a colour space converter, more or less. So it can't do much, but there are lots of platforms that are as simple as that, and to support them easily, looking at the first use case of a single camera, we have added an implementation of what we call the simple pipeline handler.

The simple pipeline handler is probably the most complex piece of code we have in libcamera. We're great at picking names, remember the Broccoli Pi. It's not named simple for its implementation, but because it supports simple pipelines. And the reason why it's complex is that it does everything completely automatically: it enumerates, using the standard kernel APIs, what you have in your device, and it tries to figure out a linear pipeline between a camera sensor and a V4L2 capture video node. If it finds that pipeline, okay, that's a camera. That's all implemented internally. If your hardware is as simple as that, if it doesn't contain any block that requires device-specific knowledge for its configuration, then you can use it. There's documentation in the source code, if you want to look at it, with a brief explanation of the architecture and what it does.

But to use it: we're looking here at the media graph of the device, and what we're looking at, when we want to support a single camera, is one camera sensor connecting to a CSI-2 receiver; the crossbar switch will be passed through, and then there's just the short processing pipeline. For that, this is the source code change in the simple pipeline handler if you want to add support for your platform: it's a one-line change. I told you you'd have to write code.
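That one-line change, adding the capture driver's name to the simple pipeline handler's table of supported drivers, can be pictured with a standalone sketch like this. The table and function here are illustrative stand-ins (the real list lives in the simple pipeline handler's source file in the libcamera tree), and the driver names are examples:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical stand-in for the simple pipeline handler's table of
// supported capture drivers; supporting a new simple platform is one
// new entry in a table like this.
static const std::vector<std::string> supportedDrivers = {
	"imx7-csi",
	"sun6i-csi",
	"mxc-isi", /* the one-line addition for the i.MX8M Plus ISI */
};

// Sketch of the check performed at match time: is the enumerated
// media device driven by a driver we know to be "simple"?
bool matchesSimplePipeline(const std::string &driver)
{
	return std::find(supportedDrivers.begin(), supportedDrivers.end(),
			 driver) != supportedDrivers.end();
}
```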
Hopefully that's going to be manageable. You add the name that the driver exposes to userspace, and that's it. cam is a command-line test tool that we have; it can list the available cameras in the system. It finds one, and then, from the first camera, I ask it to capture five frames, and it captures five frames. It's that simple.

Actually, maybe not, because it prints a warning saying: oh, there's an unsupported V4L2 pixel format that I don't know about. This has been fixed since then, but if your device brings a new pixel format that libcamera doesn't know about, you may have to add support for it. So you add your new format to our list of supported formats, in the corresponding C++ file, with a description of what the format is: it has a name, it has 24 bits per pixel, here is the pixel layout in memory. And with that (don't skip to the next slide... okay, let me go back behind the keyboard, this doesn't work really well) you get rid of the warning, and you get full support for your new pixel format.

If you look at dual cameras, same thing: the simple pipeline handler will see you have two image sensors, there are two video nodes at the end, and it will try to figure out the pipelines in between, automatically. So, same thing: it lists two cameras. At the top you can see there was just one camera before, now we have two of them. Capturing five frames from one camera works; then we try to capture frames from the second one. Again, this has been fixed since I presented this last time, but we didn't have support for correctly enumerating the internal routes inside the crossbar switch. We had the Media Controller API that lets you see what's happening in your device, but nothing in V4L2 to handle a crossbar switch correctly. So what we have done is extend the kernel API. A very important point: I said that libcamera is not about kernel development, but if we have issues with the kernel, well, someone has to fix them. So we
drive the Media Controller and V4L2 API development. But this is not a hostile takeover of the kernel; we are not replacing V4L2, we are a userspace framework. So we added support, in libcamera and in the kernel, for the new V4L2 subdev routing API, which allows showing what's happening inside all those blocks in the pipeline. We added support for that in the helpers we have in libcamera, and updated the simple pipeline handler to use that API. And then, with relatively simple changes (I'll let you judge, a few patches), you can capture frames from your second camera. This is not even something you will probably have to do, because now that the API is supported, if your upstream driver uses it, this works out of the box. But that's the kind of experience you can expect if you want to do something that still remains simple but isn't exactly supported today.

Now, what's more interesting is if you have an actually complicated platform that requires a device-specific pipeline handler. First of all, we have documentation: we have a pipeline handler writers guide. Step one, you read it. It's in the libcamera source tree, compiled to HTML. It's a nice document that uses an actual use case to guide you through adding support for a new platform, and I'm going to try to explain that today in five easy steps.

So, first thing: I'm going to write a skeleton for the pipeline handler and wire it into the build system. libcamera uses Meson for its build system. Hopefully you all like that.
It's not make-based; we found it much, much easier to use than autoconf, but I know this is sometimes a bit of a controversy. So you add your new pipeline handler's name to the meson options file, then you create, inside the pipeline directory, a subdirectory for your new pipeline handler with a meson.build file. We're going to have a single source file to start with.

And we create our skeleton. I think this is probably the most important piece of information I'm trying to convey today. libcamera is in C++, by the way. We have a class called PipelineHandler; we're going to inherit from it and implement the set of operations that all pipeline handlers have to implement. There are not that many of them; that's just all the operations that we need. We have matching, configuration handling, buffer handling, and then (no, actually, sorry, stop is underneath here) starting and stopping, and being able to capture frames. And we have a nice macro, which is very handy, at the bottom, to register the pipeline handler. So it's really about doing this and filling in the blanks.

Matching first. If you're familiar with the kernel driver model, matching in libcamera is more or less the equivalent of probe in the kernel. The idea is to figure out which pipeline handler can be used for a given piece of hardware, instantiate that pipeline handler, and create cameras. I won't show the full media graph, the big one with all the blocks and the links, otherwise we'd spend the whole day on it. But this is a piece of hardware with a camera sensor at the top (if you don't have a camera sensor, I'm not sure why you're in this room). There's a CSI-2 receiver, because most cameras we deal with today use CSI-2. You have an ISP, drawn here as a single block.
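The skeleton described above, a subclass of the PipelineHandler class plus a registration macro, can be pictured with a standalone sketch. The names follow libcamera loosely, but this is not the real API; the base class, registry, and macro below are toy stand-ins so the sketch compiles on its own:

```cpp
#include <functional>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Toy base class with the operations every pipeline handler must
// implement: matching, configuration, start/stop, and queueing
// capture requests.
class PipelineHandler
{
public:
	virtual ~PipelineHandler() = default;

	virtual bool match() = 0;       /* probe-like hardware discovery */
	virtual int configure() = 0;    /* apply a validated configuration */
	virtual int start() = 0;        /* start the streams */
	virtual void stop() = 0;        /* stop the streams */
	virtual int queueRequest() = 0; /* queue a capture request */
};

// Tiny factory registry standing in for the registration macro the
// talk mentions.
using HandlerFactory = std::function<std::unique_ptr<PipelineHandler>()>;

std::vector<std::pair<std::string, HandlerFactory>> &handlerRegistry()
{
	static std::vector<std::pair<std::string, HandlerFactory>> registry;
	return registry;
}

#define REGISTER_PIPELINE_HANDLER(klass)                                \
	[[maybe_unused]] static const bool registered_##klass = [] {    \
		handlerRegistry().emplace_back(#klass, [] {             \
			return std::unique_ptr<PipelineHandler>(        \
				new klass());                           \
		});                                                     \
		return true;                                            \
	}();

// The skeleton for our hypothetical platform: filling in the blanks
// is the rest of the talk.
class PipelineHandlerBroccoli : public PipelineHandler
{
public:
	bool match() override { return false; } /* no discovery yet */
	int configure() override { return 0; }
	int start() override { return 0; }
	void stop() override {}
	int queueRequest() override { return 0; }
};

REGISTER_PIPELINE_HANDLER(PipelineHandlerBroccoli)
```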
In reality that single ISP block can contain tens of processing blocks inside; it can be a fairly complex ISP. But its interface to the outside world is that it consumes parameters that are put in memory buffers (think, if you know about GPUs, of a command stream that you send to the GPU), and it outputs statistics that it calculates on the frames. And at the output of the ISP you have two resizer blocks, so you can capture two streams, with frames of different sizes, at the output. That's the hardware we're looking at; I've put it in the bottom right as a reminder.

So the match function of the pipeline handler will try to acquire a media controller device, among all the ones in the kernel, that matches a set of criteria: in this specific case, the name of the driver and a list of entities it expects to find. That's entirely platform-specific, of course. If it doesn't find anything, it returns false: it hasn't found hardware it can handle, and libcamera will go on to the next pipeline handler. But if you find something, well, then it's up to you, with platform-specific code, to start using the kernel devices that support your ISP and all the hardware you have in your device. So in our class we have a set of objects: a pointer to the media device that we got, V4L2 subdevs, V4L2 video nodes.

The important part here is that libcamera provides an extensive set of helpers that you can use to write your pipeline handlers. You don't have to deal with the complexities of the V4L2 API directly, at least not too much: it's encapsulated in classes that are still V4L2-specific, but that make it much easier to deal with the API than calling the ioctls directly. So let's use those helpers in our match function, now that we have found a media device.
We know we have two resizers and connected video nodes, so we loop over the two instances, looking at the names of those entities, trying to acquire them, creating V4L2 subdev instances, and V4L2 video node instances for the capture nodes at the output of the pipeline, and we store all of that internally in class members. So we build, basically, all the things that we need, and we do error checking along the way: if there's a missing piece, well, maybe there's something wrong at the kernel level, or it's not hardware we can deal with, so we bail out. This is really the equivalent of a probe function: discovering what we have and initializing everything. That's the match function.

So we looked at the resizers; we do the same for the ISP and for the CSI-2 receiver. You can get entities by name, as we've seen here, but sometimes, for other components, especially the camera sensor for instance, you may not know the name of the entity, because different camera sensors could be connected to your system. So instead, with the helpers that we have, you can follow the media graph and say: okay, I don't know the name of this block, but I know it's connected to input zero of my ISP, so what is it? You follow the links and get the pieces that you need.

So we're discovering the pipeline; same thing, we're looking for a camera sensor, an entity that exposes the camera sensor function, and if we don't find one, we bail out again.
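The discovery loop just described (find the two resizers and their capture nodes, bail out if anything is missing) can be sketched standalone like this. MediaDevice here is a toy stand-in for libcamera's helper of the same name, and the driver and entity names are invented for the hypothetical platform:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Toy stand-in for the MediaDevice helper: a driver name and the
// entity names enumerated from the media graph.
struct MediaDevice {
	std::string driver;
	std::vector<std::string> entities;

	bool hasEntity(const std::string &name) const
	{
		return std::find(entities.begin(), entities.end(), name) !=
		       entities.end();
	}
};

// Probe-like match: check the driver name, then verify that the two
// resizer subdevs and their capture video nodes are all present.
// Returning false sends libcamera on to the next pipeline handler.
bool matchBroccoli(const MediaDevice &media)
{
	if (media.driver != "broccoli-isp") /* hypothetical driver name */
		return false;

	for (unsigned int i = 0; i < 2; ++i) {
		const std::string id = std::to_string(i);
		if (!media.hasEntity("resizer" + id) ||
		    !media.hasEntity("capture" + id))
			return false; /* missing piece: bail out */
	}

	return true;
}
```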
Once we get to the camera sensor: imagine that you have a single ISP but multiple sensors. You wouldn't be able to capture from all of them at the same time, but you can pick one, capture from that camera sensor, stop the video stream, and capture from another one. So you have multiple cameras in the system, but they share pieces of hardware. This is why we introduce another class: your pipeline handler can, in this case, create multiple camera instances. We saw before how we were storing data in the pipeline handler class, but if you have data that you have to store per camera, then you create a subclass of an internal libcamera class called Camera::Private, where you can store your per-camera data. In this case we just have a pointer to the camera sensor, and the fact that we can capture two streams.

So that's how you split your data between the camera side, which is per camera, and the pipeline handler side, which is everything else. If you have a single camera supported by the pipeline handler, the boundary between those two doesn't make much sense anymore; you can put things on one side or the other, it doesn't matter too much. But the logical separation is: pipeline handler side, shared resources; camera side, things that are per camera.

So we create an instance of our camera data per camera, and we store there the pointer to our camera sensor. In the initialization function of this camera data, we call the init function of the camera sensor, again a helper class in libcamera, to help you deal with the complexity of V4L2 for camera sensors, all encapsulated there.

Getting back to the match function, we're getting to the end: we've found all the pieces we need, and we decide to open the devices. That opens the device nodes in /dev, so we can start using them, for the subdevs and for the V4L2 video nodes. And it's then time to register the camera, or the multiple cameras, that we have found in the system.
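The split just described, shared resources on the pipeline handler side and per-camera state in a Camera::Private-style subclass, might be sketched like this. All the names here are illustrative stand-ins, not the real libcamera classes:

```cpp
#include <string>
#include <vector>

// Stand-in for the CameraSensor helper class.
struct CameraSensor {
	std::string model;
};

// Per-camera data, the moral equivalent of subclassing Camera::Private:
// one instance per registered camera.
struct BroccoliCameraData {
	CameraSensor *sensor = nullptr;
	unsigned int streams = 2; /* two resizers: up to two streams */
};

// Shared, pipeline-handler-side state: one ISP shared by all sensors,
// so only one camera can stream at a time.
struct BroccoliPipeline {
	std::vector<CameraSensor> sensors;
	std::vector<BroccoliCameraData> cameras;

	// Register one camera per sensor found at match time.
	void registerCameras()
	{
		for (CameraSensor &sensor : sensors) {
			BroccoliCameraData data;
			data.sensor = &sensor;
			cameras.push_back(data);
		}
	}
};
```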
Once you get to the registerCamera call at the very end of the match function, libcamera will know there are one or multiple cameras supported by this pipeline handler, and applications will be able to start using them.

First test: once you're there, there's not enough code, obviously, to use a camera, but at least you can enumerate them, and you see that there's one camera in the system. The ID is, well, not very nice for humans, but this is a textual ID whose goal is to be stable in a system across reboots. You know that this string will be the same for the same camera even if you reboot your system, as long as, obviously, you don't change the hardware, or possibly the firmware: if you do a firmware update, depending on what the firmware exposes, the name might change. But otherwise it's a stable ID, not meant to be consumed by humans.

There's no nice name for the camera, by the way; as you saw, just this string, and that's not very nice, no real human-readable name. If we add properties in our camera data class, just taking them from the sensor helper (I'm asking, okay, I want the properties of the sensor, and we store them in the camera data), just adding that one line of code will help, and it will show: oh, it's actually an internal front camera in the system. So that's already much nicer for the user.

That's it for the matching; then we get to the configuration part. Configuration is about generating a configuration, validating what the user wants to do, and then applying it to the device. That means you have to fill in a generateConfiguration function in the pipeline handler class; it's one of the operations we need.
In it, we just create a new instance of a camera configuration and initialize it with default values. This is simplified code: there's a single stream here with a fixed resolution. Normally you would look at the stream roles that are given by the application. The application may want to say: I want a stream for the viewfinder that I'm going to display on the screen, and a stream to capture still images at a larger resolution. You would take that into account to initialize and create your configuration. So this function does that and returns a configuration to the application.

Then, that configuration object is again a libcamera class that you inherit from; you can store additional data in your subclass, and you have to implement a validate operation on it. The validate operation, if you're a bit familiar with V4L2, somewhat replicates the model we have there: if you want to configure your device, you ask the kernel to set some parameters, and the kernel comes back to you. It doesn't just say "no, I can't do that", because then you're none the wiser, you don't know what it can do instead. It gives you back something that may have been adjusted: you wanted this exact resolution, well, I can't do exactly that, so I'm giving you something that's close but not the same. The validate function here is the same: it will potentially adjust what the application requests to something that the camera can produce.

In the validate function you would typically iterate, in your camera configuration, over all the stream configurations. I said libcamera can capture multiple streams from one camera, so one camera sensor, at the same time, generating two streams at potentially different resolutions and formats. You're going to look at each stream configuration and see if what the user requested is possible. Is it beyond the limits of the hardware?
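That adjust-rather-than-refuse behaviour of validate() can be sketched standalone like this. The limits below are made up for illustration; the real checks depend entirely on your hardware:

```cpp
#include <algorithm>

// Mirrors the semantics described above: validate() never just says
// "no". It adjusts the request to something the camera can produce
// and reports whether anything changed.
enum class Status { Valid, Adjusted, Invalid };

struct StreamConfiguration {
	unsigned int width;
	unsigned int height;
};

constexpr unsigned int kMaxWidth = 4096; /* hypothetical ISP limits */
constexpr unsigned int kMaxHeight = 3072;

Status validate(StreamConfiguration &cfg)
{
	if (cfg.width == 0 || cfg.height == 0)
		return Status::Invalid;

	const StreamConfiguration requested = cfg;
	cfg.width = std::min(cfg.width, kMaxWidth);
	cfg.height = std::min(cfg.height, kMaxHeight);

	return (cfg.width != requested.width ||
		cfg.height != requested.height)
		       ? Status::Adjusted
		       : Status::Valid;
}
```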
Could each of the streams be valid independently, but together they exceed the bandwidth you have in your system? That's where you do that adjustment.

Then, once the configuration has been validated, the application asks libcamera to configure the camera, with the configure function. That's where we take the configuration that was given, look at all the devices we have in the pipeline (all the things we discovered at match time, all the things we have opened), and call the V4L2 helpers to enable or disable links in the graph, or take the configuration of a stream and set the sensor format and the formats on all the blocks in the pipeline. Because this pipeline is entirely device-specific, that's device-specific code; it's up to you to configure your device.

So we're done with the configuration; we're nearly ready to use the camera. We need buffers to capture frames into. libcamera is based on a model where, mostly, you import buffers: the API assumes that the application has buffers and imports them into libcamera, saying, here are the buffers into which I want you to capture frames. That's the case, for instance, if you want to display something: you may allocate buffers from your display device or from your GPU, and you can use those buffers with the camera. But in some applications, especially in the simple case, you may not have that.
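Stepping back to the configure() call for a moment: the device-specific part is largely propagating the validated format block by block down the pipeline, letting each block report the format it actually produces. A toy sketch, with all block types invented:

```cpp
#include <vector>

// A frame format, reduced to just a size for the sketch.
struct Format {
	unsigned int width;
	unsigned int height;
};

// A toy pipeline block: pass-through by default, but a resizer
// overrides the size, much as a real set-format call on a V4L2
// subdev returns the format the hardware actually settled on.
struct Block {
	bool isResizer = false;
	Format output{};

	Format setFormat(const Format &in) const
	{
		return isResizer ? output : in;
	}
};

// configure(): walk the pipeline (sensor, CSI-2 receiver, ISP,
// resizer), applying the format to each block in turn.
Format configurePipeline(const std::vector<Block> &blocks, Format fmt)
{
	for (const Block &block : blocks)
		fmt = block.setFormat(fmt);

	return fmt;
}
```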
libcamera can also provide you buffers. Every pipeline handler has to implement a function called exportFrameBuffers, which allocates buffers when the application asks for them; the way it's done, really, is using the V4L2 helpers and the V4L2 API to allocate the buffers in the kernel and then export them to the application. That's usually a very simple piece of code, but it makes the life of applications a bit easier. In an unknown future, when we have a unified memory allocator for media devices on Linux systems (something similar to Android's ION, for instance, based on dma-buf heaps, with a userspace library to handle all the constraints), this should go away, because the job of libcamera is not to allocate buffers. But as long as we don't have that, this is something we have to keep.

And then we're ready to implement the capture side: starting the camera, getting the capture requests from applications, and, well, stopping the device when you're done. How is that done? We have a start function; again, this is about starting the hardware, so the media graph is specific to your device and the code is specific to your device as well. In this case, we loop over the paths: has the configuration enabled each of the two output paths? For each path that is enabled, we call the streamOn function to start the V4L2 video device.

Queueing a request: we get a function call, with what we call in libcamera a capture request, from an application. It's basically a bundle of one or more buffers and controls. We have to queue those buffers to the capture device, which is again the V4L2 video device at the end of the pipeline. That's also usually a fairly simple implementation. The counterpart of starting is stopping; nothing very complicated there either, it's a fairly small amount of code. And then we're ready for the final test: we're going to capture frames with our camera.
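The start, queue, and stop paths just described can be sketched standalone like this. V4L2VideoDevice here is a stub with the same role as libcamera's helper, and the two-path layout matches the example hardware; none of this is the real API:

```cpp
#include <cstddef>
#include <deque>
#include <vector>

struct FrameBuffer {
	int id;
};

// Stub with the role of the V4L2 video device helper: stream on/off
// and a queue of buffers handed to the kernel.
struct V4L2VideoDevice {
	bool streaming = false;
	std::deque<FrameBuffer *> queued;

	void streamOn() { streaming = true; }
	void streamOff() { streaming = false; }
	void queueBuffer(FrameBuffer *buffer) { queued.push_back(buffer); }
};

struct BroccoliPipeline {
	// One capture video device per output path, with a per-path
	// enable flag coming from the validated configuration.
	std::vector<V4L2VideoDevice> captures{2};
	std::vector<bool> enabled{true, false};

	// start(): stream on every path the configuration enabled.
	void start()
	{
		for (std::size_t i = 0; i < captures.size(); ++i)
			if (enabled[i])
				captures[i].streamOn();
	}

	// Queue a request's buffer to the capture device at the end
	// of the (first) pipeline path.
	void queueRequest(FrameBuffer *buffer)
	{
		captures[0].queueBuffer(buffer);
	}

	// stop(): the counterpart of start().
	void stop()
	{
		for (V4L2VideoDevice &dev : captures)
			dev.streamOff();
	}
};
```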
We start the cam application. It prints this, and then you wait a bit, and a bit more, and, well, you can make yourself a cup of tea if you want to, but nothing happens. Why? Well, we've started the device, but there's no code so far that actually connects the events we may get from the kernel to the pipeline handler. We don't know when a frame is ready, and we're not signalling it to applications. So that's something we have to add: at match time, for all the V4L2 video nodes, we connect the bufferReady signal, which is emitted by the helper classes in libcamera, to a handler that's device-specific. In our pipeline handler we create a handler that we call bufferReady, and in it we complete the buffer and we complete the capture request from the user. This is what signals to the user that the frame has been captured. With that code in place, now it's much better: we can actually capture frames.

So that's a pipeline handler. In practice it's going to be slightly more complex than that: a bit more error handling where there were placeholders, really validating a configuration and generating one, and configuring the device takes a few more lines of code because you have more blocks. But more or less, that's what we're looking at.

Then, how do we actually control the ISP? Because all of that was the plumbing; that's why it's called a pipeline handler, it puts all the blocks together and handles the pipeline. But we haven't seen how to calculate all the parameters for the ISP and how to handle the control loop. I was talking about IPA modules. I said we had a nice guide to tell you how to write pipeline handlers; well, we have a nice guide to tell you how to write IPA modules as well. Isn't it great?
It's in the same directory, compiled to HTML, and you should start by reading it. Four steps only, compared to five, which makes it sound much easier to write support for an ISP. It may not be exactly the case, but let's start with what we call the IPA interface. I said the IPA module is loaded dynamically at runtime and communicates with the pipeline handler. Well, if you want the two to communicate, and you have to communicate device-specific information, because those pieces of code are device-specific, then you need a communication protocol that's going to be device-specific as well. That's what we call the IPA interface.

We use in libcamera an IDL that actually comes from Chromium, called mojo. It's just a C-like syntax to express an interface. Here we have our IPA interface with a set of functions in it: init, start, stop, configure, mapBuffers and unmapBuffers, and queueRequest. This is entirely device-specific. There are a few functions in the interface that are required: the init function, start, stop, and I think configure, mapBuffers and unmapBuffers are required, but the rest is up to you, depending on what you need to communicate. Well, you create your functions and indicate what type of data needs to be passed between the two sides. One important point: all the functions are synchronous by default. There are asynchronous functions as well; at runtime, once the camera is started, everything becomes asynchronous.
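As a sketch, a hypothetical IPA interface for our imaginary broccoli platform could look roughly like this in the mojo-based IDL. The module name, function signatures, parameter types, and the bundled event at the end are all made up for illustration; the real syntax, the required functions, and the attributes are documented in the libcamera IPA writer's guide.

```
/* Hypothetical IPA interface for the imaginary "broccoli" platform.
 * All names and parameters here are illustrative only. */
module ipa.broccoli;

import "include/libcamera/ipa/core.mojom";

interface IPABroccoliInterface {
	/* Synchronous by default. */
	init(libcamera.IPASettings settings) => (int32 ret);
	start() => (int32 ret);
	stop();

	configure(libcamera.IPACameraSensorInfo sensorInfo) => (int32 ret);
	mapBuffers(array<libcamera.IPABuffer> buffers);
	unmapBuffers(array<uint32> ids);

	/* Asynchronous once the camera is started. */
	[async] queueRequest(uint32 frame, libcamera.ControlList controls);
};

/* Event interface: emitted asynchronously by the IPA towards the
 * pipeline handler. One bundled event carries both the ISP parameters
 * and the sensor controls for a frame, instead of two separate events,
 * since IPA calls may cross a thread or process boundary. */
interface IPABroccoliEventInterface {
	paramsComputed(uint32 frame, libcamera.ControlList sensorControls);
};
```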
You tell the IPA: okay, the statistics are ready, here are the statistics. But you don't block, and the IPA at some point will compute ISP parameters based on them. That's the other side of the IPA interface, the event interface, where the IPA will say, asynchronously as well: okay, the ISP parameters are ready, the sensor parameters are ready. IPAs are isolated in a separate thread, so they don't block the main pipeline handler. They are also potentially isolated in a separate process if they are closed-source, so communication can be a bit costly. Try to minimize the number of events: if on the IPA module side you always generate the ISP parameters and the sensor parameters for the next frame together, well, don't make two calls to two events, one for parameters ready and one for sensor controls available; bundle them together and pass the data together.

So we have an interface, and now we need to look at how to handle that in the pipeline handler: wiring up the capture of the statistics from the ISP, communicating with the IPA, configuring the ISP. And this is going to be, because we're running out of time unfortunately... We can't do it all in 40 minutes, so I'm sorry to disappoint you, but that's the topic for the next talk about libcamera. Obviously, if you really can't wait, you can have spoilers: I'm available, the whole libcamera team is available, to guide you through this. I will probably try, for Embedded Recipes in September, to give a talk about the actual IPA module and how to implement the algorithms, the auto-exposure and the auto-white-balance and all those pieces. But this was really the plumbing, the pipeline handler side: how are we going to communicate with our IPA module? We'll then see how all of that is done.

So that was libcamera in a nutshell. Contact information is available, and now we may have a couple of minutes for questions, or half a minute or something like that. Two minutes for questions, and, well, you can come to me afterwards anyway if you're not too hungry. Any questions?
Yes?

So, thanks for the presentation. I have a question regarding the ISP. What about firmware that's loaded by the kernel? Is that also a task of libcamera?

Easy answer: the ISP firmware is not something that libcamera will handle. Even from a kernel point of view, what we consider to be the ISP interface, what the device exposes to the kernel, is a combination of hardware and firmware. If you have software running on the ISP, the kernel driver may need to load it with the firmware API, so that's a kernel task. But then the API that the firmware exposes towards the host is what the kernel will use and what libcamera will use, so we don't dive into the firmware.

So basically it's the same as a GPU?

Yes, exactly. Okay, thanks.

Another question in the back: is libcamera already capable of generating a command stream for the ISP?

Yes, well, it depends a bit what you mean by command stream. The platforms that we support today bundle all the ISP parameters in a memory buffer. They're expressed, because that's how those ISPs work, as C structures; complicated C structures with hundreds of parameters. You can consider that to be a command stream. But if you have an ISP where, instead of having a set of parameters, you have things that look more like commands, it's exactly the same thing from a libcamera point of view: it's about putting those things in a buffer, and that's the job of the IPA module, which is platform-specific. So that's fully supported. Yes.

So thank you very much, we are out of time. Maybe people can find you in the hallway and ask some more questions? Absolutely. Thank you very much.