So we're here at Embedded World 2019, and who are you? Hello, my name is Guy, I'm the VP of Business Development at Collabora, and this is... My name is Robert Foss, and I'm a graphics developer at Collabora. So what is Collabora? Collabora is a global consultancy specializing exclusively in free and open source software development for customers. A free and open source development consultancy. Yes. So you have lots of work to do? We are quite busy, and we have fantastic engineers around the world who are helping our customers adopt existing free software projects, work on new projects for them, or take code they have internally and turn it into free software as well. So it's been keeping us really busy. And let's check over here, as we walk around open source. You're talking about an open source Mali GPU driver, are you? This demo right here is running the very recent Panfrost driver, and as you can see, it's working. It's been available upstream in the open source repository for about a month now. You're running on the ROCK Pi? Exactly, this is the ROCK Pi 4, which has a Rockchip SoC, made by our friends at Radxa, which is a really nice house, and this is one of our favorite SBC boards available today. This is an RK3399. Exactly. This form factor right here has dual USB 3, USB 2 and a whole bunch of other stuff. So have you at Collabora been working on the Panfrost GPU driver? So we've been part of the graphics community in free software for a really long time, going back to X11, X.org and all of the iterations before that, embracing the Wayland specification when it first started, and continuing beyond that. The movement that started with Freedreno and the Vivante driver, Etnaviv, etc. made it natural that eventually there would be a Mali GPU driver in free software as well. Robert and the rest of the team have been involved in this initiative for quite some time.
And as Robert just said, about a month ago, in early January 2019, the user space portion landed in Mesa, and Robert and others on our team have been involved in that. We cannot take all the credit; it's very much a community effort. There have been people out there doing great work on all of those free software GPU drivers, and the Panfrost driver is just the latest iteration of that. So how much have you been doing with Panfrost? So we've got a developer working 50% of his time on it, I believe, and that is something like a third or a fourth of the total developer hours being poured into the Panfrost project at this time. So what is it specifically you've been doing? Are you trying to get it to the same level as Freedreno? Or how far along is it so far? So getting it to the level of Freedreno is definitely the goal. The Freedreno driver is very stable and mature by now. But that's a long way off, and we've only started taking the first steps here. As you can see, it works, but this is a very simple demo. We want it to be performant and conformant. So how far is a long way in the open source world? Well, if we look at some... Stuff happens fast sometimes? Exactly, there's a range. It can take anything from something like seven years to something much more reasonable, like four years or two years. How about six months? Impossible. How about if somebody comes and invests a whole bunch of engineers to help? It really depends on what you want to do with it. If you want limited support for your specific application, something is doable very quickly. But if you want broad support for every feature out there, it's going to take a bit longer. So maybe in two years, you think it could be a very smooth, hardware-accelerated GPU on a Linux OS on Mali devices? I think that's a very reasonable timeline. Give it two years and this will be a very different story. Very, very smooth. How smooth is it now? It looks like the cube is moving fine.
Yeah, this is 60 FPS for sure. But this is a very simple demo; it's what we use for development in-house. If this works, it means we haven't done anything terrible, basically, so it's the lowest possible threshold. I think the barrier of complexity has gone down quite a bit as well. The quality of the subsystems that these drivers need to build on has evolved a lot since the original attempts at free GPU drivers. So whether it's at the Mesa layer or the kernel layer, all of that has improved quite a lot. Is it better than Lima? Well, Lima was struggling from the fact that all of the underlying APIs were a lot more complex, and it had a very complex kernel space that they couldn't really work on. This is a very different beast that now has access to simpler kernel-layer efforts, et cetera. I think one of the demos on the other side, which is not running right now, is showing video integration for those interfaces as well. So one of the next steps in making these drivers useful is being able to render video and different types of content as well. Is it the same sort of platform? Yes, it's the RK3399, the same SoC. Can you turn it on? It's going to start up; it is turning on right now, it just started a second ago. So it's going to go through a complete cycle, reflashing itself and booting over the network. This is a demo showcasing our continuous integration workflow, leveraging some existing free software, of course. We're using Jenkins, we're using LAVA, and some of the other tooling from Linaro. We're very much involved in the KernelCI effort, and in this particular demonstration we're showing an ODROID-XU3 Exynos board, as well as a Samsung Chromebook Plus based on the Rockchip RK3399 CPU, basically cycling through tests that Guillaume over here has developed for us. Guillaume is one of the lead developers in our core group that focuses on automation. Test automation, yeah.
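The cycle described here, with each board being power-cycled, reflashed, network-booted and then run through test suites, can be sketched in a few lines. This is a minimal illustrative model, not the actual LAVA or KernelCI APIs; all names (`BoardRun`, `ci_cycle`, the board identifiers) are hypothetical.

```python
# Illustrative sketch of the board-farm CI cycle described above:
# power-cycle, reflash, network boot, run test suites, record steps.
# Not the real LAVA/KernelCI interfaces; names are made up.
from dataclasses import dataclass, field

@dataclass
class BoardRun:
    board: str                 # e.g. "odroid-xu3" (hypothetical ID)
    kernel: str                # kernel tree under test, e.g. "linux-next"
    steps: list = field(default_factory=list)

    def power_cycle(self):
        self.steps.append("power-cycle")

    def flash(self):
        self.steps.append("reflash")

    def netboot(self):
        self.steps.append(f"netboot:{self.kernel}")

    def run_suite(self, suite):
        # A real job would launch the suite and parse its output;
        # here we only record that it ran.
        self.steps.append(f"test:{suite}")

def ci_cycle(board, kernel, suites):
    """One full automated cycle, as the demo loops through continuously."""
    run = BoardRun(board, kernel)
    run.power_cycle()
    run.flash()
    run.netboot()
    for suite in suites:
        run.run_suite(suite)
    return run

run = ci_cycle("odroid-xu3", "linux-next", ["v4l2-compliance", "igt"])
print(run.steps)
```

In a real deployment, Jenkins would trigger the job, LAVA would drive the board over its serial console and power relay, and the results would be reported back to KernelCI.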
So I've been using a Samsung Chromebook Plus as my main laptop for 18 months. Do you use it too? I don't use this particular model, but I have the older version, the RK3288-based one. And so what are you actually doing there? You're booting all kinds of different Linuxes on it, or? Yeah, we mostly use it as part of KernelCI to boot upstream kernels, from mainline, stable and linux-next, and to run some tests like v4l2-compliance, the Video4Linux compliance tests. So what's going on here? It's running with Panfrost. So like the other demo you showed a bit earlier, this is running with the Panfrost free software user space driver for the Mali GPU. So that's the same spinning cube. So you have Panfrost as smooth here, and it's the same. I mean, it's the same as over there, but... Yeah, but this is a different device with a different screen. It's a very high resolution screen, 2400 by 1600 I think. And there, is it the same Rockchip, or what are they using? It's an Exynos, a Samsung Exynos 5422. So an older chip, yeah. And do you have lots of stuff going on with this one too? On this one, we've been running the IGT test suite automatically, and that's also something we do as part of KernelCI, to test the DRM/KMS stack basically. So you work on lots of ARM stuff? We've been involved with ARM for quite some time. We have been working with Intel as a customer as well, and other architectures. We're very much agnostic. We very much welcome the RISC-V group as well and all the efforts they're doing there. We think that's really an amazing opportunity for free software at large, and for free hardware also. Now that this Chromebook has cycled through its testing as well, you can see an element of the demo that wasn't on the other one: this is the exact same Panfrost driver running the mpv video player, rendering directly to the screen as well.
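A CI job like the one described has to turn a tool's text output into a pass/fail verdict. Here is a small sketch that parses the summary line v4l2-compliance prints at the end of a run; the "Total: N, Succeeded: N, Failed: N, Warnings: N" format is an assumption to verify against your version of v4l-utils.

```python
import re

# Parse the final summary line of v4l2-compliance output so a CI job
# can decide pass/fail. The line format below is an assumption; check
# it against the v4l-utils version on your target.
SUMMARY_RE = re.compile(
    r"Total: (\d+), Succeeded: (\d+), Failed: (\d+), Warnings: (\d+)"
)

def parse_v4l2_compliance(output):
    """Return (total, succeeded, failed, warnings) from tool output."""
    m = SUMMARY_RE.search(output)
    if m is None:
        raise ValueError("no summary line found in output")
    return tuple(int(g) for g in m.groups())

sample = "...\nTotal: 43, Succeeded: 43, Failed: 0, Warnings: 0\n"
total, ok, failed, warn = parse_v4l2_compliance(sample)
print(failed == 0)  # the job would mark this run green
```

The same pattern applies to the IGT runs mentioned below: execute the suite, scrape its result summary, and feed the verdict back to the dashboard.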
So you see completely smooth 60 frames per second MPEG-2 or H.264 video, I can't remember which, playing back on that Chromebook Plus. Is it V4L2 or...? This is using the V4L2 Request API directly, through mpv, on the RK3399. So smooth video playback, with no software decoding involved there whatsoever; it's completely decoded in hardware on the chip. And so what kind of customers do you have that require all these things, the ones that you support? The ones that we can talk about publicly are, for example, the Chromebook team at Google, which has been a fantastic partner with which we've worked, enabling SoCs across a lot of the products that their OEMs are working on. And we are very thankful to Google for progressing so much of the ARM SoC support in Linux and the various libraries upstream, because of their dedication to the Linux kernel and the work that's happened there. So on the ARM Chromebooks, you are a big part? We're a part; I'm not going to pretend we're a big part. We're involved. The RK3399 Chromebooks, for example, or the previous ones too. I don't know all of them off the top of my head, but I know that our team has been involved with the Google engineers to make sure the best performance and the longest battery life possible is achieved. With what is generally considered a very small engineering team, everything that Google has achieved there has been phenomenal, and they deserve all the credit. And so that's the kind of awesome work you're doing, and how big is your team? The company overall is a little over 100 employees, distributed worldwide across over 35 countries. We have one big office in Cambridge, England, where there are more Nobel Prizes than anywhere else in the world, and we also have an office in Montreal, Canada, because we love our Canadians and we love our guys in Quebec, despite what I might say behind closed doors.
And yeah, we have a very big team working on LibreOffice as well, so we're very proud of all the work that Collabora Productivity is doing on LibreOffice. That alone is like 15 or 18 engineers, and a lot of the people watching this might be using it, so please know that the Collabora Productivity team has been a big contributor to that. So if there's a customer somewhere out there that wants more LibreOffice? Yeah, absolutely. We have various people adopting Collabora Online, which is a hosted or on-premise online version of the office productivity suite that works very well for transactional workers and people that have specific needs and known templates and environments for document editing and spreadsheets. Is that like Google Docs? It is, kind of, for people that have very specific needs, that want to control their data, that are sensitive about the deployment: public offices, potentially military adopters, or contractors that have very strong ISO 27001 compliance requirements, for example, or maybe HIPAA compliance in the US, where they just can't put their documents somewhere on the cloud and assume things are going to happen nicely. They need to control everything on-premise: their hard drives, their storage, their network, their files, their document suite. Maybe it runs on a NAS or on some kind of server or something. Yeah, typically these customers are fairly large enterprises that have multiple servers serving up hundreds if not thousands of desktop users. And do you also do stuff with multimedia? Yeah, we do a few things around multimedia. I'm not very familiar with that part of the company, so I'll let Olivier take over. Hi, so who are you? I'm Olivier Crête. I lead the multimedia team at Collabora, and our team focuses largely on the hardware integration of multimedia technologies.
So things like making video hardware decoders and encoders work, and doing things like zero-copy from the capture device to the encoder, or from the decoder to the screen, or from the decoder to the encoder for transcoding, to a scaler, et cetera: integrating all of these hardware blocks in a way that makes them useful. We also work a lot with streaming technologies. A lot of video and audio is being sent over the internet, and there are all kinds of formats and compatibility issues and all these things that we help people with. We've been, over the last 12 years, the largest contributor to GStreamer, and it is really a fantastic framework, because its modular nature makes it really easy to enable hardware. The largest contributor? Yes, over the last 12 years. And is a big part of the world using GStreamer, or what? Yes, GStreamer is widely used in a lot of consumer products that people don't know about. For example, in almost all LG and Samsung TVs, all the smart part is GStreamer. They are both major players in the in-flight entertainment business, so the little screen in front of you when you're on a flight has GStreamer in it. A lot of cameras, security products, a lot of the hardware you see around these shows that has video in it uses GStreamer to connect all the various bits of hardware. And how much of it is V4L2 related? So one of my senior engineers, a principal engineer, Nicolas Dufresne, is the maintainer of the GStreamer V4L2 integration, and he's been working really hard with the kernel community to grow the V4L2 API to a point where it is actually possible to write user space that is the same across all different kinds of hardware.
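The decoder-to-screen path described above is typically expressed in GStreamer as a chain of elements. A minimal sketch, assuming upstream element names (`v4l2h264dec` for the stateful V4L2 decoder, `kmssink` for direct DRM/KMS output; availability depends on which plugins are built for the board, and `demo.mp4` is a placeholder path):

```python
# Sketch: compose a GStreamer pipeline description for hardware H.264
# playback, file -> demuxer -> parser -> V4L2 decoder -> KMS sink.
# Element names are the upstream GStreamer ones, but whether they are
# present depends on the plugin set shipped for the target board.
elements = [
    "filesrc location=demo.mp4",  # placeholder input file
    "qtdemux",                    # demux the MP4 container
    "h264parse",                  # frame-align the H.264 stream
    "v4l2h264dec",                # stateful V4L2 hardware decoder
    "kmssink",                    # render directly via DRM/KMS
]
pipeline = " ! ".join(elements)
print(pipeline)
```

The resulting string is what you would hand to `gst-launch-1.0` on the command line, or to `gst_parse_launch()` from application code; with dmabuf passing between the decoder and the sink, the decoded frames never touch the CPU.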
So we've been very aggressive, I would say, in ensuring that kernel people write drivers that have the same API whichever hardware there is, by refusing to put any hardware-specific hacks in GStreamer and only adhering to the standard API. And in the last maybe four years, three years even, there's been huge progress there from the kernel people, especially in the codec space, finally making unified user space code possible instead of having user space hacks for every hardware vendor. And the Kodi people, do they use all the stuff that you've done? So Kodi, they write their own for a lot of everything. They use V4L2, but everything above it is really custom. They're more in the traditional way of working, where they were completely okay with doing hacks for every single piece of hardware, because, well, they're a product organization, and their goal was really to get the product out and working, instead of trying to make the infrastructure work for everyone, which is what we do as a consultancy. It's really hard work to do video stuff, no? Well, it's engineering, right? It's just software, how hard can it be? That's what everyone thinks. But you need to optimize things and try to reach smooth playback, and sometimes that's a bit of a challenge. Yes, absolutely. Video is a strange beast, because you have a target and it's a very fixed target. You want your 60 frames per second; 59 is not okay, it's unacceptable, and if you go to 61, no one cares, right? You just have to reach that magic 60. So if your hardware is just on the limit, it's very, very challenging. If your hardware can do just 60 or 60.5 and you have to squeeze out every single bit, and then next year's hardware can do 70, all of this hard work is pointless, because the hardware is super fast and you don't care anymore. Cool. All right. So thanks a lot.
So what's next for Collabora? Lots of different things. As I was about to say earlier, there are a lot of industrial customers that we can't mention that are using free and open-source software inside their products, and we're among the companies helping them build those; we're continuing to do that and it's keeping us busy. We look forward to 2019. I think that the open GPU drivers, as you identified yourself, have great momentum, and they're going to make a lot of things really good for the entire industry. I think RISC-V is a very attractive and very interesting prospect as well. The Linux Foundation just announced ELISA, which is an effort around safety-critical support for the Linux kernel, et cetera. So I think there's a lot to look forward to in 2019 and beyond.