My name is Kieran Bingham, and I've been working on libcamera since the project began back in 2018. This talk is about where libcamera fits today, and about making it easier for applications and users to work with cameras on Linux.

We did two demos on Wednesday using libcamera, and I'll try to cover those a bit as well. One of my many hats has been community support, and I field a lot of questions from people trying to use libcamera now, so that has driven part of this talk. Where I want to go after that is making sure that everyone can use libcamera easily to use their cameras. A lot of this talk actually references work done by other people. In no way do I want to claim credit for work that others have done, so unless I've explicitly said "I did this", I'm probably talking about someone else's work. I'm not going to take credit for writing Chromium, for example, and I've used lots of logos, which are all their owners' trademarks.

So, cameras are becoming more complex. In the beginning there was a lovely world where applications would talk to one device. UVC cameras are a single video node, and even when we introduced the soc-camera implementation, which began to support MIPI CSI-2 receivers and these more complex camera pipelines, all of that was still exposed as a single video node on the left, and it could be controlled easily from an application.

Back in 2009 that grew into a much more complicated graph. We've got sub-devices for the sensor, a VCM for lens focus, then an ISP with both inputs and outputs, and the complexity grew. That was 2009. While we still have some ISPs of roughly that order of magnitude, we're now talking to ISP vendors with, let's say, one or two orders of magnitude more complexity; one of them has 96 DMA nodes. Imagine if your application had to open /dev/video0 all the way up to /dev/video95 to operate the camera. It's impractical.

Applications can choose to manage all of that themselves, and some do; we have examples that already do that. They can open the media graph: there's the media controller API, which exposes all of these objects, so they can open that device to get the links and entities, then open the sub-devices, configure them, and get to the video nodes. But it doesn't scale, and in the particular example I know of, they have to carry manual configuration for each platform they support. The support they add only works for their application: on that device you can't then make a web call or connect GStreamer through it. It just doesn't scale.
We created libcamera so that all of that management can be handled in a central place. libcamera itself doesn't look simple in that diagram, but the majority of that is in the internals of libcamera, which help the ISP and silicon vendors implement this support. We provide a lot of framework to handle the commonalities between platforms: moving buffers, allocating memory, configuring the streams. The little section in green is the part that vendors create to support their platform. So now all that complexity is handled in a single place, and in this slide applications sit at the top, so they don't have to worry about everything below.

So that's our solution, but why did it come to be? Everybody loves V4L2. It is mature and stable. I was chatting to Hans last night; I believe he's been involved since 2003, and Mauro a little before that. The designs originated in 1998 and the patches were merged into mainline around 2002. Don't quote me too precisely on the dates, but it was that sort of timeframe. And it provides an easy way for applications to get video. That's what they want, and that's what Video4Linux provides. It handles the formats and the configuration, and it's the same whether it's a camera, a set-top box or an HDMI receiver. I hope it's not too cheeky, but everyone loves it because it's there. There may be other ways of dealing with the hardware out there, but in Linux we have V4L2. That's the go-to system for applications to use to get video, and it's well used, so we know it works well.

Because it's been around for so long, absolutely everyone uses it. There are existing camera applications: VLC is very happy to open /dev/video0 and present your webcam in a little box so you can see your picture. There are dedicated camera applications like Cheese, or guvcview, which I often use for testing. Some of those are built on multimedia frameworks that handle the underlying hardware interaction, or even the V4L2 implementation, for you, like GStreamer. I should really have had FFmpeg in there as well, but it threw off my symmetry. On top of that we've got the application frameworks like Qt or GTK, which again give applications an easy way to access all of this. OpenCV comes up a lot with cameras, so that's important to consider there as well.

That covers the use cases where users want to get a picture, then either display it or process it. And of course, on the right, everyone is using this in the consumer use cases now, with their laptops or phones making video calls, or the browsers accessing the video directly. All of that is fundamentally looking for a V4L2 video device within Linux. For the browsers and native apps, quite often that goes through libwebrtc, which is very important at the moment.

So I started with "everyone loves V4L2", but as the complexity has grown and the software has evolved, it's become more difficult to use. What I think everyone really loves is video0: an application wants to open one thing, configure it and get its pictures. It's really easy, it's what I used to use, and it's what people and applications have come to expect. As we built up the complexity of multiple sub-devices and configuring the hardware explicitly, we introduced the media controller. I remember when I first came across the media controller I was taken aback: oh, I have to do all of these extra steps, and the command lines are tricky to get right if you have to custom-script them.
The formats can be really tricky too. On top of that, you are now responsible for making sure that the format you want at the output is propagated all the way through every device in your pipeline, and that can be quite tricky. It's fine if you have a single use case: you do the work once and hard-code it in a boot-up script. But doing that generically, for an application that just says "I want a picture and I want my camera to work on every platform", becomes very difficult.

With sub-devices you now have direct control over a scaler or a format converter, but you have to know where to look in the pipeline, which device to configure, and how to find and configure it. Even my beloved UVC, which was once the simple case of a single video node, now has a second video node that you can open to access more data from the camera. You have to know that, and when it was introduced some applications became confused: they see video0 and video1, try to open video1 thinking it's a second camera, but it isn't, it's still the same camera.

Then we get to the types of cameras I deal with, the MIPI CSI-2 cameras. With an offline ISP we'll have a CSI-2 receiver to capture the raw frames, and an ISP into which we then have to inject those frames, passing them from one device to another. That may give us multiple outputs: we might get parameter and statistics buffers, and multiple streams, where different configurations from a single capture give you different images. On the latest hardware we've got from Rockchip and NXP, for example, there's yet another block after that, so now we have to go through three devices to get an image, and that adds a lot more for each application to deal with.

Even further, V4L2 isn't enough on its own, because the underlying hardware, the sensors, needs to be configured for exposure and gain, and that requires control loops. We capture an image, run it through the ISP, the ISP gives us statistics and information about whether it's too bright or too dark, and we use that to make adjustments to the camera. So now we have a feedback loop to run, and that's not appropriate to handle in the kernel, so it's expected to be done in user space (there's a toy sketch of such a loop at the end of this section).

These types of devices are now in everyday products. They're in your mobile phones, though Android covers that separately, but there are laptops you can buy, and tablets, and if you want to put Linux on one of those devices then you have all of this complexity. So these issues are rapidly becoming more apparent to the average Linux user, whatever an average Linux user is. On top of the now consumer-facing products, we've had this issue for quite some time: embedded systems have been using CSI cameras for years and bringing in this extra functionality. And we now have mobile phones that target Linux; Purism, with the Librem devices, actually targets a Linux stack, and the mobile phone market really wants to cut costs and improve image quality, which you get by having these ISPs with full control over the image. Of course, with Android a lot of this can be, and often is, tackled with a proprietary stack, but within the Linux community we want to avoid that.
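Before moving on, here is a toy sketch, not libcamera's actual IPA code, of the kind of feedback loop I mean: per-frame statistics from the ISP driving an exposure adjustment for the next frames. The numbers and luma values are made up; the point is only that this loop has to live in user space and run continuously.

```python
# Toy sketch of an auto-exposure feedback loop: per-frame statistics from
# the ISP (here, a mean luma value) drive the exposure applied next.
def agc_step(mean_luma, exposure_us, target=0.18,
             step=1.25, min_exp=100, max_exp=33000):
    """Nudge the exposure time towards a target mean luma."""
    if mean_luma < target * 0.9:        # frame too dark: expose longer
        return min(exposure_us * step, max_exp)
    if mean_luma > target * 1.1:        # frame too bright: expose less
        return max(exposure_us / step, min_exp)
    return exposure_us                  # close enough, leave it alone

# Pretend the ISP reported these mean luma statistics for successive frames.
exposure = 5000
for luma in (0.05, 0.07, 0.11, 0.15, 0.19, 0.23):
    exposure = agc_step(luma, exposure)
    print(f"luma {luma:.2f} -> next exposure {exposure:.0f} us")
```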
So it's all well and good saying that V4L2 needs more and that we've got a solution in libcamera, but now applications have to do something different, and they don't like that. I've felt a lot of resistance: suddenly applications find their users saying "hey, I want your application to run on a Raspberry Pi with the new libcamera stack", and that can come as an unexpected event to software maintainers. Why do they now have to do something new? It worked before. So it does take effort; applications now have to do something new. Part of that I've seen through the community support role I've had. I now quite often write C++ code, but I remember when it was scary. Applications written in C can be quite hesitant about the "plus plus", and libcamera is written in C++. We do expect more language bindings, but that's our native implementation.

The one that is more painful is: have we finished? The truth is no, we haven't ever declared a release. We do have version tags, and we'll come on to that later, but we've deliberately not been ABI stable because we know we're still developing. We've got a lot of work to do, and we know we've got fundamental design changes in mind which are going to break the API, and that adds further pain for anyone wanting to move to libcamera.

So I want to go over a little bit of what you can do; I don't want to be too negative. You can already use libcamera. There are solutions out there that are working, we use it for testing, and on Wednesday I gave a demo which I hope shows that it's soon ready for the average user. If someone asks me how to get started with libcamera, I'm probably going to point them at GStreamer first, because of the extra ecosystem you get with it. If you're trying to make an embedded application, GStreamer provides all the things that libcamera is not going to do. We're not going to encode the stream, we're not going to send it out on the network, that's somebody else's responsibility, and we're not going to deal with audio. Other frameworks, and that could be FFmpeg too if we get an FFmpeg integration, can all handle this on top. We already have a GStreamer element, written by Nicolas, to integrate libcamera into a GStreamer pipeline. It's been a bit clunky up till now, but we've just had a Google Summer of Code student over the summer improving it, which has fixed a lot of the format negotiation, and I really hope that's going to make things easier for people. The example on the slide is just a use case I've found handy in the past: capture the video stream, send it over my network and view it on my laptop (there's a small sketch of that pipeline just below). Any GStreamer example should work by simply dropping the libcamera source into the pipeline.
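To make that concrete, here is a minimal sketch of that kind of "capture, encode, send over the network" pipeline driven from Python. This is my own illustration rather than the exact pipeline from the slide: libcamerasrc is the GStreamer element mentioned above, the other elements are stock GStreamer ones, and the host address and port are placeholders.

```python
# A sketch of streaming a libcamera camera over the network with GStreamer.
# Assumes GStreamer 1.x, PyGObject and the libcamerasrc element are installed.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    'libcamerasrc ! videoconvert ! jpegenc ! rtpjpegpay '
    '! udpsink host=192.168.1.50 port=5000'   # placeholder receiver
)
pipeline.set_state(Gst.State.PLAYING)

try:
    GLib.MainLoop().run()          # stream until interrupted
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)
```

On the receiving laptop a matching pipeline (udpsrc with RTP/JPEG caps, rtpjpegdepay, jpegdec, autovideosink) displays the stream; replacing everything after videoconvert with autovideosink gives a local preview instead.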
Raspberry Pi have been fantastic to work with on libcamera. They jumped on board; they were looking themselves at how to move away from the proprietary GPU implementation they had, and we had a nice collision of timing: they were trying to move into the open, we were trying to support this, and it worked really well. It hasn't been as smooth a transition as we'd hoped. I think everything has taken longer, and as I say, some users have found the change to this new system frustrating. But it really does bring a lot of advantages: now you can take a Raspberry Pi and almost any camera can work with it.

With Raspberry Pi's old stack they supported the three cameras they produced, but now you can take cameras from Arducam or Vision Components, and the ability to integrate a camera into a platform is open. I think that's amazing. It works the other way too: I can now take Raspberry Pi cameras and integrate them into other platforms. For me, one of the best things about libcamera is that I can take any camera and run it on any platform, and we handle all the glue in the middle.

So they have libcamera-apps. Sometimes I find that a little frustrating, because the libcamera-apps are written by Raspberry Pi, not by us, and they're targeted at the Raspberry Pi platform. But they do provide a good sample point for how the APIs can be used, and I would like to think those applications could run on any platform supported by libcamera, although Raspberry Pi are under no obligation to support other platforms, because they obviously have their own target.

On top of all that we've built Python integration, and there are two layers to it. We have native Python bindings for libcamera, which try to match the native C++ API. But Python was very well supported on Raspberry Pi's old stack through Picamera, so Raspberry Pi also wanted to make sure there was an easy transition for Picamera users. So on top of the libcamera Python API sits Picamera2, and, even with the original Picamera, I was always amazed by how little code you need to capture a picture and show it on the screen, while still offering quite a lot of flexibility (there's a small sketch of that at the end of this section).

libcamera started out targeting Chrome OS devices, with a lot of thanks to Google; that was the driving force that kick-started the project. So we do have an Android HAL implementation, which is used by Chromebooks. Here I've got a Soraka device, an HP x2 I think, and this is my home lab, where I keep a Lego man in front of the camera because I use it for testing. This already works. There's still quite a way to go on the Android integration, but it is functional: we can make video calls with it, we can use the camera app. There are several levels of support in the Android camera HAL, and we're constantly trying to improve our implementation. What I like, though, is that even though Raspberry Pi isn't supported by Chrome OS or Android directly, there are community builds: I've seen Chrome OS running on a Raspberry Pi using libcamera, and I believe an engineer has taken the same path to run Android, AOSP, on a Raspberry Pi using libcamera.

We have one last hope for those who really don't want to change anything: a V4L2 compatibility layer. We think of it as our last resort; it's not expected to be efficient or full-featured. Using an LD_PRELOAD we can wrap all the calls an application would normally make to V4L2, the open, close and ioctl commands, and present a libcamera device as if it were a V4L2 device. That sounds great, but it's not necessarily going to be the best solution for everything. We created a little script called libcamerify so that no one has to care about the underlying implementation: you prefix your application with libcamerify and it should present a libcamera device to it as if it were V4L2. That only works if your application is using those calls in the first place, so it's not a catch-all or a saving grace for everything, and it needs a bit more development; we've got more to go there.
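Going back to Picamera2 for a moment, here is a small sketch of what I mean by needing very little code to get a picture. It assumes a Raspberry Pi with the Picamera2 package installed; the output filename is just an example.

```python
# A minimal Picamera2 sketch: configure, start, grab a still, stop.
# Assumes a Raspberry Pi OS system with python3-picamera2 installed.
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()
picam2.capture_file("test.jpg")   # JPEG written straight to disk
picam2.stop()
```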
Internally, libcamera has its own test tools, which are actually what most people see when they first start using libcamera, because they're built in; they're the tools you get when you compile it yourself. qcam is a Qt5 application which also serves as reference code for how to interact with libcamera. cam is our Swiss Army knife; it's never expected to be a product for end users, it's a test tool, but it gives us a lot of flexibility in how we exercise the stack. The pictures on the slide are there intentionally: the kernel provides test drivers, vimc and vivid, and those are really helpful for testing all of this without hardware, because you don't actually need a camera connected. vimc provides a sensor-style interface, and vivid is great for me because it produces almost every pixel format, which is very helpful for testing advanced features. Oh, I meant to say: qcam also had a Google Summer of Code student on it this summer, so it has been improved recently and we're going to have more control support in there. And we've got a really nice feature called capture scripts, which lets us script captures, setting controls on the camera so that things are repeatable.

My colleague Jacopo gave a presentation, pre-COVID now, where he did a live-coding demo of how to use the camera. I thought it was a great talk, and very brave of him to do live coding. What it produced is a utility called simple-cam, and whenever people ask me how to get started with the API, I point them there, because it's a completely over-documented application: it explains why you're opening this or why you're handling that. It's very helpful as a starting point, and I also use it for my own testing, because I make sure it stays up to date. Every time I do a build, or our internal CI builds run, it gets built as a fully external project to validate that we can still capture. The tool itself only runs a capture and prints the metadata; it doesn't deal with presenting frames.

So now on to the fun bit for me. The demo we had on Wednesday was all about showing more user-facing use cases for the camera. Back in May I met up with the PipeWire developers, George and Wim, and I went there with a single goal: the Surface Go 2 that we've got here. There's a big community of Linux Surface users, they install Linux and the camera doesn't work out of the box, and they're doing all sorts of workarounds to be able to make a video call on their laptop. I felt bad about that, even at a personal level, because it should be better. So when I met up with the PipeWire guys, I really wanted to see the ability to make a video call through Chromium, just going to meet.jit.si and having that work. This was the last hour of the week, and we had it working, so I'm smiling in the photo. But you'll note the colours are wrong, and there are two windows, because we had to do some horrible workarounds. The proof of concept was there, though. The general premise is that any supported device goes through libcamera into PipeWire. I said before that a lot of things are out of scope for libcamera, and security, or managing access control, is one of them. But PipeWire provides that.
If you use your phone and it pops up to say "this application would like to use your camera, allow or deny?", libcamera itself is always going to say yes. PipeWire and the portals are what provide the ability to present that choice to the user, to say "yes, I'm willing to have my image captured", so the XDG desktop portal is quite important for that.

And Pengutronix, Michael, I'm going to pronounce your name wrong, I'm sorry, did amazing work here. I went to this PipeWire hackathon, and almost that same week I discovered the work that Pengutronix had been doing. Everything I wanted, this connection between PipeWire and WebRTC, they'd already been working on. I was very happy, because that's what I was able to integrate to produce the demo by the end of the week. That's what allows Chromium, Firefox and the other browsers to talk to the camera through the portal. So my goal was then to present that here this week at the demo, which I'm glad to say we succeeded in.

We actually had two demos, but the one I'm talking about here is the lower half, where we had a Raspberry Pi talking to a Surface Go 2. Both of those devices require libcamera to operate their cameras, and I just had them running a video call between them. We were set up in the middle of the hall, and the colours are right now; I managed to fix that on Friday night, so I was happy. There is still work to do: we need to help get Pengutronix's work fully upstreamed, and there's still some format negotiation to handle within PipeWire. For the demo it wasn't all smoke and mirrors, but I did hard-code the sizes just to make sure it worked. Unfortunately the codecs didn't work; I wanted to let people join the call as well, but if I'd opened it up it probably would have crashed. I wasn't bothered about the decoders, I wanted to show that the cameras were functioning. So it's getting there. I hope that within six months this will be much more generally available, and we'll help Pengutronix wherever we can with the integrations.

I said a lot of this talk comes from my community-support hat on libcamera, so I want to go through some of the applications that have contacted me, not just work that we've done, but where other people have already started using libcamera, and what others can do. This one I've had a lot: Motion is an application people run on a Raspberry Pi, and, as I said, we're now at the point where users start trying to take on the libcamera stack, and that lands on applications almost out of the blue, with users asking "can you make your application work with libcamera?", which the maintainers weren't anticipating. We couldn't tell the world "you all need to switch to libcamera" before it was complete. The Motion developers have a GitHub repository, and there were issues there that I ended up following, trying to help people get it supported. I did propose a native libcamera API integration for Motion, but the C++ was rejected; the maintainer didn't want C++ within his framework. That's understandable, and it's partly because he is rewriting the application in C++ as MotionPlus anyway. So it's not the end of the world, and while it hasn't been done yet, I believe there's scope for native libcamera integration within MotionPlus, which will help all those users.
The libcamerify script I mentioned, the V4L2 preload, I have heard works in some use cases, and I've also heard that people have found it doesn't always work. Unfortunately I can't test every application in every combination, so in that kind of scenario I need people to tell me what they found didn't work and report it, and we'll try to help fix it.

This one comes close to home for me, because I have a 3D printer at home. I run OctoPrint, which uses mjpg-streamer, and like many OctoPrint users I have a Raspberry Pi with a camera so I can get a live stream of my 3D print. Of course, it didn't support libcamera, so it broke. I haven't really explained who Arducam are: Arducam make cameras that target Raspberry Pi devices and are now supported there, and they support a lot of other platforms too, so they've been trying to improve libcamera support as well. They took mjpg-streamer, added libcamera support and made it work, which is great. But now I see a split in the community, and we need to work out how to make sure that gets either upstreamed or managed. And with the nature of open source software, that mjpg-streamer is itself forked from an older project, so I'm not even 100% sure what the correct route for upstreaming it is.

A very frequently asked question is OpenCV support with libcamera. We have advertised, again because we're trying to get more engineers to work on this, that we would support a Google Summer of Code student to work on OpenCV integration with libcamera. Unfortunately, this year no one took it up. But I believe OpenCV is a great target for native libcamera support. It may even go through PipeWire on some platforms, but it can already use the GStreamer element: you can use an appsink and appsrc in a GStreamer pipeline to get data into OpenCV, and we have a GStreamer element, so it can already be used (there's a small sketch of that at the end of this section).

I mentioned libcamerify; this came up again. I had one user using the V4L2 adaptation layer who found he couldn't set the frame rate. So he added support, and he's got a patch. It's not yet upstream and needs a bit more work, but for all of these use cases I need people to try it, find out what does or doesn't work, and ideally work on fixing it, because we are a finite resource, and I will always help anyone who is trying to tackle it themselves. Then we can work on improving the ecosystem together.

The one at the top deserves a special mention. Eric at Red Hat is creating a little application called TwinCam, which is a nice pun on the name. It's derived from our cam utility, and they're making a fast-boot camera for automotive use cases: you boot up and it instantly shows the cameras. He's been very active in our community, supporting us with fixes, bug reports and development; among other things, he added SDL support to cam, which was nice. Constatus is a really interesting project that lets you set up a web page, tie in cameras and present them through a home server. The developer contacted us through IRC, asked a few questions, then just got on and did it: he implemented native libcamera integration into his platform, so that already supports libcamera directly. There's more that I would like to see use libcamera, but we need either those applications or more engineering resources to work on it.
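Going back to the OpenCV route for a moment, here is a rough sketch of what that GStreamer path looks like in practice. It assumes an OpenCV build with GStreamer support and the libcamerasrc element installed; the resolution caps are just an example.

```python
# A sketch of feeding libcamera frames into OpenCV via GStreamer:
# libcamerasrc produces frames, appsink hands them to cv2.VideoCapture.
import cv2

pipeline = (
    "libcamerasrc ! video/x-raw,width=640,height=480 ! "
    "videoconvert ! video/x-raw,format=BGR ! appsink drop=true"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

while cap.isOpened():
    ok, frame = cap.read()        # BGR frame as a numpy array
    if not ok:
        break
    cv2.imshow("libcamera via GStreamer", frame)
    if cv2.waitKey(1) == 27:      # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```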
Megapixels is an interesting case: it's ultimately the example I mentioned where they've chosen to manage all of this complexity themselves so far. They have a couple of configuration files per platform, which tell them which devices to open, a static media controller configuration. But it still doesn't scale, and it means that on those devices you can't then make a video call. More than that, on the PinePhone Pro, with the Rockchip RK3399, you have an ISP, and someone has to run the 3A loops to manage it. So we do believe Megapixels could use libcamera natively, and Dorota from Purism is working on this; they're looking at it for their phone as well.

The GNOME Camera app is a really nice redesign of a camera app, almost to complement or replace Cheese. It uses libaperture, which is part of the same project, to abstract the camera handling. I could certainly see libaperture gaining native libcamera integration, or, because it is a desktop application, the PipeWire camera portal would be a really good fit for it. So if anyone wants to work on the GNOME Camera app and integrate the XDG camera portal into it, that would be a brilliant way to spend your time, and I'll help, and I'm sure there are plenty of community members who can help too.

Beyond that, we've had other people ask questions about how to use libcamera or how to do things, but they haven't always told me what they're building. These are a summary of the ones I'm aware of. If you are creating something with cameras, and I see a lot of faces here and I hope there are lots of people using it now, please let me know what you're doing. I want to know, I want to help, I want to make sure it works. If you hit bugs, we will help you fix them. We just need to know about it and how people are using it.

So what can we do? I've said how it's harder for people to use cameras now, and a lot of what I've felt is resistance to change. So in my mind I have to find ways to make it easier for people to realise that it's something they can do. With the advent of platforms like the Surface Go, this is going to become a requirement: your app is not going to work on a Surface, or even on the Dell or HP laptops that have these cameras; you're not going to be able to capture images from those cameras without libcamera. So there's going to be an impetus to do something. I hope libcamera makes it visible that they can do something about it, but we need to make it as easy as possible for them. I've already mentioned that I really hope to see the WebRTC and Chromium integrations land upstream as soon as possible, and we'll help where we can. That will fix the majority of real users' use cases: they've got a laptop and they just want to make a video call. So that's ongoing development. I haven't yet posted the slides, but I will after this, and the links can be found there.

This is my last point, I hope, and it's the tricky part: we've so far avoided declaring a stable API or ABI, and I've argued both sides of this. We've always said the best version of libcamera to use is the latest, but I don't think we can continue to say that. Distributions are screaming for a tag. They just want to say "this is the version we're shipping", and we do provide a generated version string.
But they need to be able to say "this is the version of libcamera we provide", because applications need to be able to build on multiple distributions, so if we're not going to have a stable API they need to be able to say "I build against version 0.1" or "0.2". I fully accept that we have to do that, so I'm going to work on it next week; it is coming as soon as I can. More than that, we need better CI infrastructure. We have home-grown CI builds that run on my PC at home and on Laurent's PC, and that doesn't scale. I've been talking to Dan Stone this week, and I would like to see libcamera integrated on the freedesktop.org infrastructure, or even GitHub Actions, with the CI scripts kept in the libcamera project itself, so that if you post patches to libcamera you know they can be tested, and if we have runners in our labs they can run on real hardware. I think we need to do all of that now, and I will be working towards it.

I want to say thanks to everyone; like I said, this talk is really about other people's work. Particularly the PipeWire guys, who have supported libcamera from the start without needing much from us at all, which was really good for us. Pengutronix, thank you, you have made my world better. As I said, I went to that hackfest in Berlin thinking "within a week, let's see what I can do", and almost on the first day I found someone else had already done it; I just had to do the integration, so that was good. Raspberry Pi, of course, have been the most visible user of libcamera, and that has been beneficial for us, because now we have lots of users; not all projects like users, but they do help us find what we need to do. The other platforms we've got are helping drive algorithm development to make the support better everywhere, and of course Chrome OS really kicked off the project. So thank you all, and everyone I've missed.

In summary: cameras have got complicated, they are everywhere, and these issues are going to be on devices you want to buy. I won't go into the IPU6 discussions, where Greg said do or do not buy a specific device, but we are getting better. We've got more support for applications coming through, PipeWire is providing what we need for the desktop environment, and within libcamera we're trying to make it easier for applications to use our software. And I'm more or less out of time, so have we got any questions? Thank you very much.

You've got one minute for questions. I have an online question. Oh, we have an online question, yes. That is what the demo was, so yes, the i.MX8M Plus is supported. I could give a whole separate talk about that, so please find me afterwards if you want to know more. We have kernel patches that are going upstream, we have libcamera support that is as good as merged, and we are actively working on the algorithms to improve the image quality. So yes, it is imminently supported.

You mentioned C++ being an issue for C programs. Do you actually have C bindings, or do you plan on writing them? We do not have C bindings. I think it may be useful in the future to provide C bindings, but we haven't got the resources to do that right now; if you want to take part in that, please do. But GStreamer is a C API, so you can use GStreamer as a C interface into libcamera already.

Any more? Okay, thank you very much.