So, thank you for coming today. Can everybody hear me? Microphone's on? Okay. I'm here today to talk to you about a brand new project called libcamera. I'm going to switch from the website to the slides. You might have heard of this before if you've talked to me or to a few of the people who have been involved over the past few days, but today we are unveiling something that we hope is going to be the future of camera support in Linux. There's a brand new website that has just gone live: you can go to libcamera.org and see it. Please note that during my talk I would prefer if you could listen instead of looking at your computers.

I'll start by quickly explaining the reasons behind this project, and then we'll dive into the subject. I won't repeat the talk that was given yesterday by Mauro, who I believe is in the room somewhere, where he explained the problems we have with camera support in Linux and the solutions we're trying to develop; I'll briefly summarize the problem and then concentrate on the solutions.

The problem, the "why" question: as I believe you all know in this room, we have an API in the kernel called Video4Linux2 (V4L2) that has been developed for a long time. We've had camera support in Linux for more than a decade; I was close to saying two decades, but that might be just a bit too long. We've traditionally supported PC-based hardware that more or less followed a given model, a given hardware architecture, and we modelled the userspace API around that. At some point we also started supporting embedded systems. They were pretty simple at the time: you more or less had a camera sensor connected to an SoC, and there wasn't much going on on the SoC side. You had an interface to the camera sensor, possibly a scaler or format converter, not much more, and everything was exposed through the V4L2 API. It was just as simple as that.

After some time, hardware became more complex. That's a picture from 2009-2010, and it is much simpler than what we can find in some systems today. When hardware became more complex, we found that we had to depart from the traditional model: instead of exposing a camera to userspace as a single device node with a single API to control it, we had to let userspace dive into the device, expose the internal topology, and give applications fine-grained control over all the components that make up the device. So at that point we introduced new APIs in the kernel: an API called the media controller, and extensions to the V4L2 API to support this. It became more complex for userspace as well: you have more complexity at the hardware level, more complexity in the kernel, and a more complex API exposed to userspace. You can't do any magic; it became more complex for applications too.

Traditionally, applications would just interact with a simple device through a single V4L2 device node, with a large set of ioctls, a large set of operations. I would like to say it's not too complex, but I'm probably biased because I've been working in this field for too long; in my opinion it's a manageable situation. With a more complex device, you have to interact with several device nodes using different APIs, and they all interact together. From a userspace point of view, that additional complexity was required.
You have a more complex device, you have more features, you want to control those features, and you can implement many more use cases than you could before, so that requires more complex code in userspace. The problem is that we pushed the complexity to userspace, and then more or less ignored userspace completely: well, here are these brand new kernel APIs, that's all great, now you can use them, you can do marvellous things with your cameras. But how do you do that? Well, here's more or less extensive documentation; now it's your problem, not ours. And that didn't really fly.

There were devices being developed at the time, I'm thinking in particular about Nokia phones, which is where these new APIs all started, where the applications running in userspace were really tailored to the system. That was more or less okay: you could hard-code use cases in userspace, you could hard-code a lot of assumptions about your system, because you know what your userspace is going to run on. But that just didn't scale to regular Video4Linux applications.

So the plan back then was to introduce a new layer and have everything go through libv4l. libv4l is a userspace library that mimics the V4L2 kernel API and exposes it to applications. From the application's point of view it can either be completely transparent, trapping the calls to libc, or you can explicitly link against the library and replace your libc calls such as open(), ioctl() and mmap() with v4l2_open(), v4l2_ioctl() and v4l2_mmap(). We sketched an architecture where libv4l would get support for plugins; we would have device-specific plugins implemented by the device vendors, it would all be part of the libv4l code base, everything would be great, and that never happened. Partly due to lack of time; and as you all know, at some point Nokia decided to move away from Linux, so the camera team there, who had lots of great projects... I'm not saying we would have achieved what we wanted to do, but we at least have a good excuse for not having done it.

So that's the problem we have today. This never happened, and even if it had happened, it wouldn't have been enough. What we have there is a library that handles the additional complexity and hides it from Video4Linux applications, which is great: you could use your regular V4L2 applications unmodified and still be able to interface with your complex cameras. But if the hardware becomes more complex, and if the regular V4L2 API is not enough to expose all those new features to userspace, then a library that exposes exactly the same API can't do any magic. Applications can expect the old features to work, but nothing new.
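For reference, a minimal sketch of the explicit libv4l mode mentioned above. The v4l2_open(), v4l2_ioctl() and v4l2_close() calls are the real libv4l2 API (link with -lv4l2); the device path and the QUERYCAP query are just an arbitrary example.

```cpp
// Explicitly linking against libv4l2 instead of using raw libc calls, so that
// libv4l can transparently handle format conversion behind the V4L2 API.
#include <cstdio>
#include <fcntl.h>
#include <libv4l2.h>
#include <linux/videodev2.h>

int main()
{
    int fd = v4l2_open("/dev/video0", O_RDWR);       // instead of open()
    if (fd < 0) {
        perror("v4l2_open");
        return 1;
    }

    struct v4l2_capability cap = {};
    if (v4l2_ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)  // instead of ioctl()
        printf("driver: %s, card: %s\n",
               reinterpret_cast<const char *>(cap.driver),
               reinterpret_cast<const char *>(cap.card));

    v4l2_close(fd);                                  // instead of close()
    return 0;
}
```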
So libcamera is a new project that we decided to launch. As its name implies, it's a library, and I assume you can guess what it deals with. It's in the very early stages of development. We have sketched an architecture; it's fairly high level. We have a list of features, things we want to do. Hopefully some of them are things we will achieve; some of them will certainly change in the future, because we will realize that not everything works as expected, and the world is always much more complex than we would like it to be. But we do have an architecture.

This is not just a camera library, it's a full camera stack in userspace. At the very top you're running applications. Those applications are obviously outside of the scope of the stack, outside of the scope of the library, sorry. Below that, what's in green and red in the diagram is part of the libcamera project. We have divided it in two parts. At the bottom, in red, is the libcamera framework. That's the main part of libcamera: a library that sits on top of the kernel devices, the kernel drivers, and exposes an API to userspace. We also have optional language bindings. The library will be implemented in C++ and will expose a C++ API. We do want compatibility with everything in userspace, which means we want to stay compatible with plain C applications as well, so we're going to have a plain C API. We will likely have bindings for other languages as we go along; there's no fixed plan or deadline for when that will happen, but if anyone is interested in a particular language, I'm sure people will contribute and submit patches.

On top of that we have another layer, in green, called the libcamera adaptation layer. The idea there is to translate between this brand new API, which no application supports today, and the existing camera APIs in Linux, in the Linux world in general. One of them, of course, is the V4L2 API. We want to remain compatible; we want to implement compatibility with regular Video4Linux applications. This will be done on a best-effort basis because, as I mentioned, we have much more complex devices today, and using an old API based on a different hardware model will certainly run into limitations. We will not implement every feature possible, but the goal is to be able to use the camera the same way you would use a webcam on a computer today: you should be able to capture a video stream in different resolutions and different formats, all through the V4L2 API. That's one of the components.

Then we have another component on the other side, which is an Android camera HAL implementation. Camera support in Android is based on a really large component called the camera HAL, the hardware abstraction layer, which is developed by device vendors; the Android camera framework sits on top of it. I say very large because the API exposed at the top of the HAL is very high level and also very flexible. It allows you to do many things with the camera device that you have. It's obviously tailored for use cases that are more in the phone and tablet area, but it does allow you to use pretty much all the features of the cameras you have today. Each vendor implements that, and there's a lot of code duplication, because a large part of the HAL is not really vendor specific. But vendors definitely customize their HAL implementations, and that's where they implement the 3A and lots of image enhancement algorithms; we'll get back to that. So we would like to have a standard Android camera HAL implementation on top of libcamera. When I say standard, it means a single implementation that would not contain any vendor-specific code.
Vendor-specific code, and we'll see that some will still be needed, will all live inside the main libcamera component. That means that traditional V4L2 applications, Android applications, and applications using the libcamera API natively will all be supported. I'm thinking in particular of the Chrome OS camera stack: we plan to support Chrome OS as well, and we've had technical discussions with the Chrome OS camera team, who were very supportive of this project. They hosted a meeting in June in the Google office in Japan, and they want to contribute to the project as well. Other applications, based on GStreamer for instance, we want to target them too: we want to have a GStreamer element based on libcamera. So pretty much any application running on a Linux-based system should be able to benefit from this project and get as many features as possible out of the camera, without the fragmented world we have today, where device vendors or SoC vendors provide a fully closed-source implementation for Android and leave you completely unsupported if you want to use a traditional Linux distribution or any other system.

Now, let's dive into a bit more detail on what we want to do. I will start by presenting the features we want to implement, the functionality we want to expose, and then dive a bit into how that will be implemented. Sorry, that's the wrong button.

The first concept we have is the concept of a camera device. You should think of a camera device not as what you would naturally expect as a developer, but as an end user would. If you ask anyone who has no exposure to software development and who has a phone, "what's a camera in the phone?", they will give you a pretty straightforward answer: "I have two cameras in my phone, a front camera and a back camera." They don't care how many sensors are used to implement that, they don't care whether those sensors are connected to the same ISP inside the SoC or what's in there. It's really the feature that is exposed towards the outside world. That's the core concept we want to have in the library: we're going to expose camera devices, and we're going to expose an API that lets you handle them. We will implement enumeration of those camera devices; it will be possible to enumerate all camera devices available in the system, with a few additional features as well, as we'll see. The devices at the kernel level, the device nodes, all the underlying concepts will be hidden from the upper layers. New camera applications based on libcamera will not have to deal with /dev/video7; they will deal with the concept of a camera device and be decoupled from all that.

We also want to expose the capabilities of devices. I mentioned that cameras in today's devices are getting more and more complex. They support all kinds of different capabilities. The industry is moving in a direction where pretty much every vendor wants to implement what end users expect today, but they still try to differentiate: features will vary from model to model, from SoC vendor to SoC vendor, from device to device. We want to be able to expose that.
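As a purely illustrative sketch of what enumerating such camera devices might look like from an application: the class and method names below (CameraManager, Camera, cameras(), name()) are hypothetical and were not a defined API at the time of this talk.

```cpp
// Hypothetical sketch only: illustrates the intent of camera device
// enumeration, not an actual libcamera interface.
#include <iostream>
#include <memory>

#include <libcamera/camera_manager.h>  // hypothetical header

int main()
{
    libcamera::CameraManager manager;
    manager.start();

    // Enumerate every camera device in the system. The application never sees
    // /dev/videoN nodes or the media controller topology behind them.
    for (const std::shared_ptr<libcamera::Camera> &camera : manager.cameras())
        std::cout << "Found camera: " << camera->name() << std::endl;

    manager.stop();
    return 0;
}
```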
Because we have lots of features available in today's cameras, an API strictly based on individual features would be a bit impractical to use. From an application point of view, if you want to develop an application and say, well, I need to capture a video stream for video conferencing purposes, a single stream that I want to display on the screen while I handle the encoding and streaming over the network myself, that's high-level functionality. If you need to go through a list of 300 features or capabilities and verify that the device implements every single one you need, or possibly that the device implements a feature that you need, or another one you could use as a workaround, that becomes complicated. So we will also have a higher-level view of features, a concept of profiles, where you could say: I want a camera device that implements the video conferencing profile, or I want a camera device that supports the point-and-shoot camera profile.

The next concept we will have is video streams. We want to be able to stream multiple streams of images from a single camera. The reason is that you may want to use your camera for several purposes at the same time. I mentioned video conferencing: you might want a different resolution on screen than the resolution you encode and stream over the network. You may want to display the live stream on the device screen and at the same time record a video, possibly at a different resolution as well. You may want to capture still images, again in different resolutions. So you need to be able to capture several streams in parallel. That's a feature available in many cameras today, with different limitations on the number of streams and the resolutions they support, but it will be built into the libcamera API.

We want, for a given stream and for all streams, and this is where it becomes more complex, to implement what's called per-frame control. Traditionally, a camera exposes lots of controls that influence the images being acquired. Think about exposure time, for instance: you're going to get lighter or darker images, and you certainly want to adjust that based on the lighting conditions. Think about lots of image enhancement features, video stabilization, and probably a flash if you want to take still image captures in dark conditions. So there are a lot of things we want to do. Traditionally, in the V4L2 API, we had support for a variety of controls, but with one big limitation: you are capturing a video stream, one frame after another, and when setting a particular control, say increasing the exposure time, you never knew when it would take effect. The frames captured from the device are completely decoupled from the control of the capture and of the image processing. If you want to implement something that gives you good quality out of the camera, if you want to make sure you always have the right brightness and the right exposure time, if you want to implement autofocus or auto white balance, you need to deal with all those controls, but in a way that is really tied to the video stream. If you want to take a still image capture with flash, you don't want to activate the flash and then receive an image that was captured either before or after the flash fired.
So we'll support the concept of per-frame controls, where you can modify all those parameters for every frame that is being captured, in a way that is synchronized with the video stream.

I mentioned 3A, which means the three automatic algorithms: auto exposure, auto focus and auto white balance, plus other image enhancements we can think of, video stabilization for instance, but vendors can implement much more than that. The way this usually works is that you have a camera sensor connected to an ISP, an image signal processor, inside your SoC. The ISP captures images to memory, and in most cases it is also able to generate statistics: histograms and a whole range of statistics tailored to camera usage. The component in blue marked as 3A is more or less a control loop. If you remove it, you can still capture a stream of images and modify the controls that influence the images you capture and all the processing being done. But if you want to implement auto exposure, you need to take an image, check whether it's overexposed or underexposed and by how much, and then adjust the exposure time for the next frame. So you have a control loop there. You can have a very simple implementation in a hundred lines of code that gives you barely usable results, or you can have an implementation that is the result of two or three years of research and development by device vendors who really, really try to optimize image quality. That's the main differentiating factor, at least from the point of view of SoC vendors today, and they do not want to release that under an open source license. That's their know-how, at least that's what they believe. I'm pretty sure they believe that nobody in the open source community could come up with an open source implementation that would match the level of quality they have, because they're the best. It's a bit like how the open source community could never come up with GPU drivers, completely impossible. We want to create a framework and an ecosystem where open source re-implementations will be possible. We also want that framework to allow closed-source implementations from vendors, because that's what's available today, that's what people developing products want to use, that's what's used in the closed-source camera HAL implementations that every SoC vendor provides, and we want to benefit from it.

One of the last features we want to support: I mentioned the adaptation layer, and the fact that we want to be able to support existing applications using existing APIs. So we'll have a V4L2 adaptation component, we'll have an Android camera HAL implementation, I already briefly talked about that, and we plan to implement adaptation components for any framework that would be of interest to the project. Initially we thought about V4L2, the Android camera HAL and GStreamer. If anyone in the future wants to support something else and contribute code, that will certainly be welcome; the framework will allow it. We have a modular architecture, so we can implement new components.

Those were the high-level features that will be available. Next, let's look not from the outside point of view, but dive inside libcamera itself. This is the architecture we have drafted today. I already mentioned the concept of a camera device; that's going to be the core of the library.
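To make the camera device and per-frame control ideas above a little more concrete, here is a hypothetical sketch of a capture loop where each frame is tied to a request carrying its own control values. None of the names used here are a defined API; they only illustrate the model.

```cpp
// Hypothetical sketch only: Request, createRequest(), queueRequest() and the
// control identifiers are illustrative names, not an actual API.
#include <memory>
#include <utility>

#include <libcamera/camera.h>  // hypothetical header

void captureBurst(const std::shared_ptr<libcamera::Camera> &camera)
{
    for (int frame = 0; frame < 10; ++frame) {
        std::unique_ptr<libcamera::Request> request = camera->createRequest();

        // Controls travel with the request, so they are guaranteed to apply to
        // the frame produced for this request, not "at some point later" as
        // with a traditional V4L2 control.
        request->controls().set(libcamera::ExposureTime, 10000 + frame * 500);
        if (frame == 9)
            request->controls().set(libcamera::FlashMode, libcamera::FlashOn);

        camera->queueRequest(std::move(request));
    }

    // Completed requests come back asynchronously with the captured buffers
    // and metadata describing the control values actually applied to the frame.
}
```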
We'll have a camera device manager that is able to enumerate devices, and a camera device will be the main object you interact with. As large a part of the camera device as possible will be device-agnostic: we want as much code as possible that can be reused and shared between vendors. Today, in the Android world, vendors implement completely closed-source camera HALs. All of them have to handle buffer management and memory allocation; they all have to go through the same set of tasks in which there's no added value. It's just a waste of resources. So we want that in a code base that is open source and shared between the different vendors. We will have vendor-specific components, in red in this diagram; there are two of them, and we'll come back to that: the pipeline handler and the image processing algorithms, which we have already talked about.

The camera device manager will allow an application to get a list of the camera devices present in the system. That's fairly straightforward. The plan is also to support hotplugging of cameras, and in the other direction hot-unplugging: applications will be notified when a new device appears in the system, and notified when a device disappears, hopefully without crashing. That's more or less what's in the camera device manager; it's not going to be very complex.

Then, and this is where it gets a little more interesting, I mentioned we have two pieces that are vendor specific. The pipeline handler is the first one. So what's a pipeline handler? We have a camera sensor on the top left; hopefully everybody can recognize the very neat picture. It's connected to a receiver, in this case for instance a CSI-2 receiver, but it can be any kind of interface, and it's very common for the raw images to be written directly to memory. We have a bigger ISP inside the system, with multiple internal components of various levels of complexity, which perform various image processing tasks. That ISP will generate one or multiple streams of images and, as I mentioned before, statistics as well, and all of that is written to memory. So to get this to work, you set up the first device, with the camera sensor and the camera interface, to capture to memory. You need a buffer queue there, you need a pool of buffers, you need to allocate memory, and you need to capture. Then all the images captured there need to be passed to the ISP device, which operates in a memory-to-memory fashion. You can see that just to use this, even without a 3A algorithm, without a control loop, without any fancy feature, just to capture images in anything other than raw format, think about converting raw to YUV, you need to write to memory and pass those buffers to a second device. You have two devices involved. I mentioned in the beginning that we created kernel APIs that added lots of flexibility and exposed really fine-grained details, but that means an application today needs to deal with intermediate buffers, while there's really no need to push that down onto applications. The pipeline handler is the software component that handles all of this: the multiple buffer queues, the scheduling of the buffers, and the configuration of the pipeline in the first place.
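As an illustration of the kind of work a pipeline handler hides, here is a conceptual sketch, with hypothetical wrapper types standing in for the two kernel video devices, of the buffer shuffling needed on such a two-device system.

```cpp
// Conceptual sketch with hypothetical wrapper types: the buffer shuffling a
// pipeline handler hides on a two-device system (CSI-2 receiver capturing raw
// frames to memory, ISP processing them in memory-to-memory mode).
struct Buffer;                       // a DMA-capable frame buffer

struct CsiReceiver {                 // wraps the receiver's capture video node
    Buffer *dequeueRaw();            // filled raw Bayer frame
    void requeue(Buffer *raw);       // recycle the raw buffer for reuse
};

struct Isp {                         // wraps the ISP's memory-to-memory nodes
    // Debayer/scale/convert the raw frame, filling a YUV buffer for the
    // application and a statistics buffer for the 3A control loop.
    void process(Buffer *raw, Buffer *yuv, Buffer *stats);
};

void pipelineIteration(CsiReceiver &csi, Isp &isp, Buffer *yuv, Buffer *stats)
{
    Buffer *raw = csi.dequeueRaw();  // raw frame written by the receiver
    isp.process(raw, yuv, stats);    // second device consumes the intermediate buffer
    csi.requeue(raw);
    // yuv goes to the application, stats go to the 3A algorithms...
}
```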
If you want to configure your device to route the signals in different ways, say I want to capture one, two or three streams, I need to configure the hardware pipelines for that, I need to allocate buffers, I need to pass buffers around. All of that will be implemented in the pipeline handler. There are differences between devices; this is just one example of an architecture that can be implemented by a device vendor. You could certainly have an ISP with an integrated CSI-2 receiver that doesn't have a buffer pool and doesn't have to go through memory, just a single hardware pipeline. You can have ISPs that are more complicated than that, or ISPs that support either the direct pipeline or going through memory depending on the use case. So that part is to some extent vendor specific. But the concepts of managing buffer pools, configuring pipelines and scheduling buffers, all of that is generic code that can be shared between vendors as well.

We want to implement, as I mentioned, the 3A algorithms and all the image processing algorithms. This is something I briefly explained already: the blue 3A component is something that device vendors want to keep closed for now. The way this will work in libcamera is that inside your camera device there will be a 3A API to communicate between the standard code, in green, and a vendor-specific component. That vendor-specific component will talk this standard 3A API that we will define, and the camera device, the generic code, will communicate with the kernel drivers. I mentioned this can be closed source, and that's a big problem. First of all because we want open source implementations, but that aside, there are security issues. You're running untrusted vendor code, untrusted code provided by SoC vendors who have a long history of making all their hardware perfect: Meltdown, Spectre. You don't want to trust that. Even if the vendor is not evil and doesn't try to do something behind your back, there can be bugs, and you don't want to crash your system, or worse, have your system hacked, because of a faulty closed-source vendor 3A implementation.

So what we want to do is have the ability to isolate that component and run it in a sandboxed environment. The way this will be implemented is that the same 3A API will be marshalled and go over IPC. The 3A component will be able to run completely unmodified: if you look two slides back at this image processing component, on the left-hand side it's the same API going over IPC, and on the right-hand side it's exactly the same. So from a vendor's point of view, running inside the main process or running in a separate process gives the exact same API. What it means, if you separate that into another process, is that first of all it will not be able to access the memory space of the main libcamera process, but then we can also sandbox it: we can limit the system calls that are available, we can prevent vendors from messing with the system, intentionally or through code that crashes in various ways, and prevent malicious code from messing with the system. More than that, we can prevent the sandboxed component from accessing the kernel drivers directly.
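As a side note on the IPC mechanism just described: on Linux, buffer handles can cross such a process boundary without copying pixel data, using standard file descriptor passing over a Unix domain socket. A minimal sketch, assuming the connected socket and the dma-buf descriptor already exist:

```cpp
// Standard SCM_RIGHTS fd passing: this is how a buffer handle (e.g. a dma-buf
// file descriptor) can be handed to a sandboxed process over a Unix socket.
#include <cstring>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>

ssize_t sendBufferFd(int unix_socket, int dmabuf_fd)
{
    char dummy = 0;
    struct iovec iov;
    iov.iov_base = &dummy;               // at least one byte of payload is required
    iov.iov_len = 1;

    union {
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;            // ensures correct alignment of the control data
    } control = {};

    struct msghdr msg = {};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = control.buf;
    msg.msg_controllen = sizeof(control.buf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;        // pass a file descriptor as ancillary data
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &dmabuf_fd, sizeof(int));

    return sendmsg(unix_socket, &msg, 0);
}
```

The receiving side retrieves the descriptor from the ancillary data with recvmsg() and can then mmap() it to access the image data directly.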
We want open source re-implementations to be possible. We want SoC vendors to play fair: we want them to publish open source kernel drivers that expose every control the device has, and we want them to document those controls. That's a requirement we have in Video4Linux in the kernel: if a device vendor wants to upstream a driver, and they have a custom ioctl that just passes a binary blob of data to the hardware without documenting what's in there, we will not accept that. We require them to document the API. But it's a bit difficult to ensure that what they document actually matches what's implemented. They could say: here is a buffer of control values, and here's the list of all the controls you can use and what they do. But who guarantees there's no other, undocumented control in there? Who guarantees that a closed-source implementation doesn't just go directly to the kernel APIs without saying anything? Isolating and sandboxing the component ensures that all the controls have to go through APIs we control, through code that we control, and we will be able to prevent vendors from doing things behind our back. They will still be able to ship their closed-source implementation; they don't have to disclose their know-how. If they think their image enhancement algorithms are the best in the world, that's fine. But they have to play fair: they have to make sure the kernel drivers are usable by the community, by anyone who wants to implement 3A. So, keeping their secrets while playing fair with the community.

We will have lots of helpers and base classes. I mentioned before that you want to allocate buffers, do the scheduling, run the pipeline, move buffers around, interact with the kernel APIs. We will require the kernel drivers to use the standard V4L2 and media controller APIs, because there's no reason to use a vendor custom API. We will do IPC for sandboxing. All of these are tasks that are, at least in part, the pipeline handler's job, and the pipeline handler is a vendor-specific component, but we don't want code to be duplicated even if that code is open source. So we'll have helpers and base classes that vendors will be able to use, to minimize the amount of vendor-specific code.

In the previous diagram, if we go a few slides back to the architecture, yes, here, I mentioned the image processing algorithms, which can be closed source. But the pipeline handler, the part that is device specific and that we expect SoC vendors to provide, is going to be open source. It will be part of the framework, and to get support for a camera, device vendors will have to upstream that code. That's not something they will be able to keep to themselves. The image processing algorithms will be loaded as plugins, external binary blobs; the pipeline handlers will be compiled in. Vendors won't be able to ship anything closed source there: they will have to explain how the device works and how the pipeline has to be set up, and then they can differentiate where it really matters.

So, back to where we were: V4L2 compatibility. This will be handled the same way libv4l does it today. We'll have a completely transparent layer, a shared object that you can LD_PRELOAD into the address space of a regular V4L2 application's process, which will intercept the libc calls and forward and translate them to the libcamera API. We might also want to expose an API that can be called directly, like libv4l does today; we haven't really taken a decision there. But it's very important to have a mechanism that's fully transparent, because we want to support closed-source applications as well.
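A minimal sketch of the LD_PRELOAD interposition technique this relies on. This is not the actual compatibility layer: it only shows how a preloaded shared object can intercept ioctl() and forward it to the real libc, where a real shim would instead translate the V4L2 calls into libcamera operations.

```cpp
// Build roughly as: g++ -shared -fPIC v4l2-shim.cpp -o v4l2-shim.so -ldl
#include <cstdarg>
#include <cstdio>
#include <dlfcn.h>
#include <linux/videodev2.h>

extern "C" int ioctl(int fd, unsigned long request, ...)
{
    va_list args;
    va_start(args, request);
    void *arg = va_arg(args, void *);
    va_end(args);

    // A real shim would recognize V4L2 requests here and translate them.
    if (request == VIDIOC_QUERYCAP)
        fprintf(stderr, "intercepted VIDIOC_QUERYCAP on fd %d\n", fd);

    // Forward everything to the real ioctl() from libc.
    using ioctl_fn = int (*)(int, unsigned long, void *);
    static ioctl_fn real_ioctl =
        reinterpret_cast<ioctl_fn>(dlsym(RTLD_NEXT, "ioctl"));
    return real_ioctl(fd, request, arg);
}
```

An unmodified application would then be run with something like LD_PRELOAD=./v4l2-shim.so ./existing-v4l2-app, without recompiling it.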
On the Android compatibility side, I already talked about that. At least initially there will possibly be some components, like JPEG encoding, that might not be part of libcamera in the first place but that are required by the Android camera API, so part of the HAL might be implemented outside of libcamera. Android defines multiple hardware support levels; we'll start with the simplest one and make it more complete as time goes on. The goal is to implement at least the FULL level of compatibility, and probably go beyond it.

The last point I want to mention: this is not a vendor project, even if we are backed by SoC vendors who have expressed interest, and even if we have seen interest from the Chrome OS camera team. This is a project hosted on linuxtv.org for the source code repositories. You're all welcome to contribute. We have a public mailing list, we have a public IRC channel, we have public git trees at the moment. Everything is public; we won't do development behind closed doors. Everything will be developed upstream first and released very often. That means that initially, at least, it's going to be very unstable, but that's the direction we want to take. And now, that's been nearly 40 minutes. Any questions? Yes? You want a microphone?

Thank you. Can you elaborate? You said "we want", "we will" and so on; who is "we"?

Very good question. There are four people working on this project right now, and they're all in the room. It won't be a surprise if I tell you that I'm working on it. Jacopo is sitting over there, and Niklas and Kieran are the rest of the mysterious libcamera team. We hope to expand later, because this is going to be a community project; this is a community project. I mentioned interest from SoC vendors, which I won't name at this time, but also interest from the Chrome OS camera team; they want to contribute to this. We're also having discussions to try to get the Android team involved as well, even if that's more difficult. And if you can think of anyone in the industry who should be involved, we'll contact them.

How does it work with the IPC and passing big buffers of memory between the 3A algorithm implementation and libcamera?

We're going to use dma-buf to share buffers, so what's going to happen is that you will just pass the buffer handles, file descriptors, over IPC, and if the 3A implementations need to access the buffer contents, they will just map those buffers inside the 3A implementation and access the data. So we will not copy or marshal the data, just the buffer handles.

And this is sandboxing-safe? That's okay? That's okay. Thanks.

Sometimes the 3A algorithm is part of what the SoC vendors don't want to expose, but some of them don't really want to expose the ISP at all, because they think their IP is in how to interact with it, and even exposing how the ISP affects the image is a leak of IP somewhere. How do you plan to deal with that?

We can't do miracles. Think about the GPU world, which has gone through the same thing: initially vendors were very, very secretive, and it's slowly getting better, to some extent. On the camera side we do have vendors that play fair, and we'll start working with them. We hope to show that adopting this is a very good argument for vendors towards their customers, and we hope that will push the other vendors to do the same. I mentioned Google and Chrome OS: as you may know, Chrome OS wants to have everything in the kernel open source, and nearly everything in userspace open source as well.
They understand that they can't force the device vendors to open source their 3A, in the same way that they can't force them to open source the GPU binary blobs. But we also hope to work with companies in the industry who can put at least some pressure on device vendors and SoC vendors as well. A question over there?

A simple question. At the beginning you mentioned that you were open... well, I'm part of GStreamer, so usually I might be the one writing the GStreamer plugin. But there was some ambiguity: do you want to include it inside your repository, or are you referring to upstream GStreamer?

We're open to both. For the V4L2 compatibility layer and the Android camera HAL, it's going to be under the control of libcamera. For GStreamer we're open to both models. In the initial discussions we thought it would be part of GStreamer; if the GStreamer community prefers to have it in libcamera, that's an option as well. At the beginning at least, if we start implementing that component very early, it's probably easier to have both in the same repository, because they will evolve together, and then possibly move it to GStreamer. If they're built together at the beginning it's easier, because we will be breaking the API. Okay, thanks.

One last thing I want to mention is that everything I explained here is fairly high level. If there's any feature you want to see implemented, if you have any comment on the architecture, if you think this is not going to work for this or that reason, please get in touch with us. We're really open to feedback and this is not set in stone. We already know that the architecture and the list of features will evolve over time. So don't just think "this is probably not for me because it will not work for what I'm doing"; we want to make sure it works for everybody.

Will you have some statistics on the frame rates, to synchronize the frames, and on the load on the system, for instance if there are dropped frames and inconsistent frame rates, since it's going to be completely configurable?

Right, so we will certainly have APIs to interact with the frame rate; you have timestamps, you have different features there. But built-in support for statistics, to compute the number of frame drops, and features to help debugging, I think that's something very interesting that we would like to have. It's not planned yet, but now that you mention it, I think we should have it, yes. So please don't hesitate to post your wish list and the features you would like to have on the mailing list.

Are there going to be any examples of how to use the library correctly in the source repository?

There is going to be at least one test application, and there's going to be a test suite. At some point, although it's not planned yet, we'll probably create a graphical user interface application that will interface with libcamera.

We're running out of time, and I know that people want to go to the end game, so I won't keep you any longer, or maybe one last question from Mauro?

Actually it's not a question: I'm planning to port Camorama, as soon as you have something working, to run on top of libcamera. It is already a graphical camera application, so that way we will have an example of how to do it in another application.

Well, thank you for the future contribution. Okay, thank you for attending the talk.