Hello everybody. Thanks for being here. I'm Juan Sánchez, and I'm going to talk a bit about WPE WebKit and how we created it to enable the use of HTML5 in low-end embedded devices. But before starting with the actual content of the talk, I'd like to tell you a bit about myself and the company I work for. I'm one of the co-founders of a company called Igalia. We are an open-source consultancy. We created the company in 2001; we work globally for customers all over the world, and we have a distributed team of currently 60 engineers. Being an open-source consultancy means that we have teams working on different open-source projects in areas such as browsers, multimedia, graphics, compilers, or software-defined networking, among other things. Particularly in the case of browsers, Igalia has been, during the last 10 years, one of the top contributors to the main open-source browser projects, for example WebKit, Chromium, or Firefox, and all the components related to them. What we do is basically help companies in industry that want to use these pieces of software in different kinds of devices: in many cases things like tablets, phones, smart TVs, automotive, or, in the last two or three years, more and more a variety of embedded devices, which is what I'm mainly going to talk about today. So, I'm going to split the talk into three parts so that it's clear. The first one is an explanation of the problem we want to solve and why we came up with this new solution called WPE. In the second one, I will get into a bit more detail about the architecture of WPE, its functionality, and how exactly it works internally. And in the third one, I'll talk about where the project is nowadays and where we are going, how we are thinking about the future. So, let's start with the first part, the problem we want to solve. I guess this is not a secret for anybody here, but just in case. 
As you know, many embedded devices are getting sophisticated. It's very common today that they run some kind of GNU/Linux with a touch screen, and the people building them want to run apps on them. This is quite common today, as I said, but it will be even more common: we know that many companies are working on new versions of their embedded devices that will look more like this. At the same time, the web is a powerful platform, it's very flexible, and, probably more important than that, many people know it; it's in the comfort zone of a large number of developers. So it's very common to see that these manufacturers want to put HTML5 applications on their touch screens. In many of these cases, the exact case they want to solve is a kind of kiosk-mode, full-screen browser where they run their applications. So this is the configuration of the use case that we want to target with this technology. And of course, in many cases it's still low-end hardware: hardware that doesn't have a lot of memory, that doesn't have a very powerful CPU, that typically has a GPU that can be used, but a lot of optimizations are needed compared to more powerful hardware. So, now that we understand the problem, the question is: which solutions can we use for this? The solutions need to be focused on being lightweight, as I already explained, and we don't need to solve all the possible use cases of a web browser in this particular scenario; we want to solve a very specific, limited case. So we need to look into the different alternatives available in open source and see how good or bad they are. The main three, the obvious choices for everybody, are Firefox and its related technologies; Chromium, with Blink as its core and V8 as the JavaScript engine; and WebKit. 
So, let's look a bit into how good each would be for this particular problem that I just defined. In the case of Firefox, as you know, it's a very stable technology that has been used as a browser for many years, but almost 10 years ago Mozilla decided that embedders, creators of new web browsers, were not their priority. So they do not provide a stable API, and actually, since around 2006-2008, many open-source browsers moved away from Mozilla technologies because of this. It's very focused on Firefox as a product, and it has a quite monolithic architecture. Things could get more interesting now with Servo, the new project by Mozilla that tries to rewrite parts of, or potentially the whole, Firefox technology stack, but it's still too early: Servo is only partially used inside Firefox for now, so it's not really a solution for our case yet. The second option on my list is Chromium. Chromium, as you know, is the core of Google Chrome. Very powerful, it has a lot of features, and it implements many web standards compared to the alternatives, but it also has a rather inflexible architecture. It needs to be used as a whole, and it doesn't provide a stable API to build your own flavor on top of it. So you end up having to fork Chromium, and this is a serious issue: you need to be sure you really want to do it, because it's a very fast-moving project, and it requires a lot of resources to maintain the fork and stay close to upstream. There are some interesting solutions that try to build something friendlier for embedders on top of Chromium. One of them is CEF, the Chromium Embedded Framework, and the other one is Qt WebEngine, which is how Qt, the graphical toolkit, provides a kind of web view for putting web content there. Both are interesting, but they have some issues. In general, Chromium is not particularly optimized for very low-end devices. 
That is not the main target for Google and for the community. Also, things like Wayland, which can be very interesting for embedded device manufacturers, are not really well supported yet, particularly on Linux. And there are licensing issues for some users: for example, Qt WebEngine is GPL version 3 or a commercial license, which for some people is a very strong limitation. Interesting, but apparently not the perfect solution either. So we come to the third option on my list, which is WebKit. WebKit is comparable to Chromium in many ways. It's the engine inside Safari and the different platforms that Apple supports. It's maybe not as complete in terms of functionality as Chromium, but very close. It does have a very flexible architecture: it was designed from the beginning to support different platforms and to enable swapping components, and there's a very interesting concept, the port. In WebKit, you can create your own port of WebKit. I will talk more about this later, but the ports provide a stable API that can be maintained upstream as part of WebKit in general, which is very useful for what we want to do. The cost of maintenance is lower because you are doing it as part of the community. There are already a few well-known ports of WebKit. Some are upstream: the ones that Apple maintains, of course, but also WebKitGTK, for example, which is very well known on the Linux desktop. And many downstream ones are maintained in different places, some of them proprietary, some of them open source, but outside of the upstream tree. One example is EFL. Qt WebKit is another one. Sony uses their own in their architectures, et cetera. There are really many here, but none of them is exactly what we were looking for either. 
They are not targeting embedded, low-end devices, so we decided that WebKit was a very good choice, but we wanted to create a new port, specific to the use case I defined before. So this hopefully explains why we needed to create something a little bit new. Now I will discuss how exactly we created it. So what is WPE? Well, first I need to explain a little bit how WebKit is structured, because I guess not everybody in the audience is familiar with the architecture of WebKit. This picture shows, very simplified, the different components you have when you are trying to create a browser, which is the application there using WebKit. There's a big part called WebCore that is potentially reusable by all the ports of WebKit. And then there are parts, in blue and orange, that are specific to each port. The blue one is the layer that application developers, the browser developers in this case, will use to access all the functionality of the port. And the orange ones are all the connections to the specific libraries on the platform that you are going to use to actually do what you want to do with the browser. And there's of course a JavaScript engine, which in WebKit is typically JavaScriptCore, although potentially you could use another one. So this means that different ports of WebKit share a lot of code, but at the same time can become very specific to the target platforms they are trying to work on. For example, in this picture you can see how this becomes more concrete for two ports, the WebKitGTK port and the Qt port. In the case of the GTK port, there is a GLib/GObject, GTK-friendly API that you can use to create applications, potentially a browser. 
And then in the orange square you can see the list of libraries you use, for example GStreamer for media content or ATK for accessibility, and a list of other things that you need to bind the generic implementation of WebKit to your specific platform. So this was just to explain what a port is, because we are here presenting a new port of WebKit. What are the key requirements of this port? I already mentioned some, but I want to be a bit more complete now. Initially we were targeting full-screen content. That's not entirely true anymore, because WPE has evolved and it also supports other things, but the main use case is full screen: you have something full screen, and you run a set of HTML5 applications there. We want it to be fast and lightweight: lightweight in terms of memory, but also the space you need on disk, and of course the amount of CPU you are going to use. Also very important, we want a minimal set of dependencies. We really want to keep it as small as possible, in all these different meanings of small. But at the same time, because, as I said before, embedded devices are getting sophisticated, we need to support almost all the typical HTML5 features. In particular we need WebGL, we want accelerated canvas, and of course, because this is a demand of every user of the port nowadays, hardware-accelerated CSS transitions, and also video playback, which needs to be accelerated as well. So it's quite a long list of interesting things. So how did we decide to do this, the creation of this new port? Well, we took WebKitGTK as a kind of starting point, because it's a very mature port, it has been maintained for many years now, and we wanted to use the part of it that is very stable. At the same time, we wanted to rethink the whole structure, so we removed the toolkit layer completely: GTK disappears. 
And we wanted to make it platform-agnostic, platform meaning the graphical stack we are going to use; I will talk more about this in the next slide. For media, we use GStreamer, which is almost the obvious choice for Linux, and we use JavaScriptCore as the JavaScript engine. We reduced the list of dependencies to a few important libraries; most of them are shown there. And we use GLES for hardware-accelerated rendering, so everything is very connected to OpenGL. The architecture is quite complex. In the diagram I showed earlier, I didn't get into detail about, for example, the blue box; but the blue box hides quite a lot of complexity in terms of multi-process and multi-threaded work. So in our port, we also have quite a lot of different processes and threads. For example, there is a process for the UI, a process for the web content, which does the rendering, one for the network, one for storage, and potentially there could be more. At the same time, there's heavy use of threading as well, for performance reasons, in composition, image decoding, or even media playback. So these are the key ideas of the architecture. And there's an even more important one, which is the concept of backends. Yes? [Question from the audience.] There's a long story about this, but the short answer is that Qt WebKit is not upstream; it's kind of a downstream thing, since mainline Qt moved to Chromium. WebKitGTK is really developed as part of upstream WebKit, and it's a more interesting choice today. Okay. So the other thing I was going to say is that on top of these key ideas for the architecture, we also came up with the idea of having different graphical backends. Typically, in WebKit, the configuration is that you have a generic part and the port part. Here, in this particular WPE port, we have a third part, which is the graphical backends. 
The main goal here is to have a very efficient way of using the buffers where we are going to render, independently of the specific stack we use, so that the generic part of WebKit doesn't really care whether we are using Wayland, or libgbm, or native implementations such as the one on the Raspberry Pi with the Broadcom-provided drivers. Basically, the backends are libraries that are separate from WPE WebKit, and you link the one you want to use. They provide the rendering targets and also a way to display the contents on the screen. For now we are focusing on OpenGL, but we already have people from our graphics team looking into how to support Vulkan down the line, so in the coming months we will be working on Vulkan support as well. If you take a look at the available backends, you will already see a few of them which are quite mature. There's one based on libgbm, which we use when the hardware is Intel or AMD with open-source drivers. There's one called Wayland-EGL, which uses Wayland internally and which we use, for example, when we have ARM Mali drivers. The main one we are using for now is the third one there, libWPEBackend-rdk as it is called in the repository, and it supports the Raspberry Pi and a few other hardware targets that are very important for us. We are also working on an experimental backend for Android, which already works, but it's still not fully public. Okay, so the architecture is a combination of traditional WebKit port ideas with this concept of backends, which makes this port a little bit more flexible in terms of what we can support. I mentioned before that one of the key goals is being lightweight, so I want to comment a bit on how true that is. We are using the different Raspberry Pis as kind of the reference hardware. 
We support many other things, but the Raspberry Pi Zero to 3 are the ones we use for checking regressions, developing in a way that makes sure the performance is good. We also use desktop machines, of course, for main development. Currently, for some configurations, you can have a fully working WPE in only 40 megabytes of disk, and the memory footprint, when it's running and rendering relatively simple web applications, is lower than 100 megabytes. So we have customers using devices with 200 megabytes in total, maybe 100 for the OS and 100 for the web applications, and it works. And we can play things like YouTube TV, fully supported with all the functionality required there, on the Raspberry Pi 1, and even on the Zero with some limitations. So it's quite lightweight. Another thing I want to highlight is that we are putting a lot of effort into media support. This is because the main use case of WPE at the beginning was media playback; I will discuss this a bit more in a few slides. So we are being very careful to have hardware-accelerated decoding. We use GStreamer for that, so it's quite powerful and brings a lot of functionality already. And also hardware-accelerated video rendering, because we want to support transformations on top of the video; we want to support modifying the video with CSS. In very specific cases we can also use external rendering, which is not ideal, but can be used when you really want something very powerful on a very, very low-end device. We are working hard on supporting three standards. MSE is fully supported already. MSE, Media Source Extensions, lets you complement the behavior of the video tag with JavaScript, and it's used by many well-known content providers, including YouTube. We pass the 2016 conformance tests and are working on the new ones. We fully support MP4, and we are working on WebM so that we can also enable VP8 and VP9. 
At the same time, we have a team working on Encrypted Media Extensions, EME. The so-called version 1 is already supported, so you can basically buy content on YouTube and play it. And we are working on what is called version 3, the latest one, which has a better architecture: it's object-oriented and uses promises. There we want to support the different CDMs, including PlayReady and Widevine; in version 1 we only support PlayReady. We are working on the open-source part of this, of course: EME defines how the open-source part needs to talk to the proprietary software here. And the third standard we are working hard on is WebRTC, which is also a priority for our users. We initially created a prototype using OpenWebRTC, which is GStreamer-based. But it has some limitations, because OpenWebRTC is not really well maintained nowadays. So we have now decided to start using libwebrtc, the same library that Chromium and Firefox use, originally created for Chromium. We already have a prototype of this working, and we are adding features on top of it, in collaboration with Apple, which is also planning to use this for their WebKit ports. So, again, a strong focus on media. And this finishes the second part, which was an overview of the architecture of WPE, the main ideas behind it, the backends, and the strong focus on media, trying to keep everything lightweight. So now the question is, how is the port doing? What's the status today? I want to go back a bit and look at the history of this port. We started the project in 2014. It was an internal experiment trying to use all the knowledge from working on browsers for the last 10 years. After that, we understood that it had a lot of potential, and we have had a permanent team working on it for the last two and a half years. 
And in May this year, a few months ago, it became fully integrated in upstream WebKit. So now it's a fully accepted open-source part of the project. We have a stable, pretty big team working on it, and we have a community that is growing: we now have external contributors, other companies that are using it, other companies that are contributing things. There are also companies that are even taking it, creating their own proprietary solutions, and eventually contributing some things back. Functionally, it's quite complete. It can be used for many things. There are still things to improve, but it's really quite stable and quite mature. Actually, I can talk a bit about adoption. Even though it only recently landed as an upstream port, it has been developed for a while, and it's really used by some companies. A big part of the work was initially, and is still now, sponsored by Metrological, which is a media company that is a provider inside the RDK consortium for big companies such as Comcast or Liberty Global. They basically use WPE as one of the key pieces of the platform they have for set-top boxes. WPE is already deployed in more than 10 million set-top boxes by these companies, mainly by Comcast, but Liberty Global is also starting to deploy it, and the number is growing very fast. At the same time, although this was the initial use case, the port has proven to be quite useful for other embedders. In the last year we have seen a lot of companies coming to WPE and deciding to use it for things like elevators, speakers, vending machines, cameras, printers: a lot of different use cases that share this idea of having a touch screen where you want to put some HTML5 applications, on hardware that is not extremely powerful. This means that we have been adding support for new hardware in the last months. 
You have already seen a few backends, and they are becoming more and more complete in terms of supported hardware. But there are still cases we don't support, and we keep working on growing this list as much as possible, with new backends or by making the available backends more complete. I also want to talk a bit about where we are putting the effort nowadays, the areas where our team is working hardest, so that we complement what we have with new things. One of them is releases. Until now, the port has been developing like crazy, preparing for upstream, so we didn't put a lot of effort into making stable releases. Now we are doing it. We have a team preparing the first release, which is coming out in a couple of weeks. And we will be doing stable releases every six months, with intermediate releases in between, in a very similar way to how the WebKitGTK port does it. For now they will be kind of preview releases; we are not going to commit to fully stable APIs, but after a while they will be fully stable, and we will guarantee backwards compatibility in future releases. So this is something very important for us. The second area is improving the QA infrastructure. There's a very strong QA process in WebKit upstream: there's a huge amount of tests, there are buildbots, there's continuous integration, of course, but we want to extend this to all the supported hardware. We really want to make sure that when we do a new release, things still work and, what is sometimes even harder to prove, that there are no performance regressions on all the different platforms. So we are building a farm, basically, of embedded devices where we are going to test all this continuously. Another area is documentation. Documentation, again, was not very complete. 
We are working on architecture documentation and API documentation, and in the coming weeks a project website will come out with all the details. Then, in terms of technical work, we are also working heavily on the six areas you can see there. The first one is that we are adjusting some things in the graphical architecture, because we want to make sure we stay competitive with all the improvements that, for example, Chromium is making. So we are trying to simplify some of the layers and potentially replace some of the libraries; we will see, there are a lot of ideas we are starting to work on. Of course, we need to keep working on EME, MSE, and WebRTC. As you saw before, they are very important for us. We have some support already, we want to support the new versions, and it's really a lot of work to make all of that work properly. We are also defining a plan to improve the networking stack, particularly libsoup, which has some issues; we are going to take over its main ownership and improve it. And we are going to improve some things on the security and browser side, including better sandboxing, for example. We have already started working on some other standards. For example, WebDriver is already fully supported; it was added very recently. We are working on full WebGL 2 support. And we are even experimenting with VR, implementing the API and doing some optimizations so that you can prototype VR things with this port as well. Modest things, of course, compared to what will run on very powerful hardware, but still useful for some of our users. Another very important point is that JavaScriptCore is really focused on 64 bits, but for us 32 bits is key, because some of the hardware we run on is ARMv6, ARMv7, or even MIPS. In set-top boxes, MIPS is very popular. 
So we are putting a team of people to work together with Apple so that the support for 32 bits becomes as good as the one for 64, which is still not the case nowadays. And finally, we are working more on the Android prototype, which opens new doors for the port. It's not a top priority, but for us it's an interesting experiment that we want to continue. And with this, I come to the end of the talk. Everything I mentioned here is fully open source, and also fully developed in the open. We have basically two repositories. One of them is the upstream WebKit repository, obviously; if you get WebKit there, you can build everything. But, and sometimes people ask why, we also have another repository that is kind of a downstream, where we have a few things that don't really belong upstream. One of them is the very specific hacks for set-top boxes and some hardware that we are using there, which is still open source but kind of ad hoc, so we don't want to mix it with the pure upstream implementation. And there are also a few other things that we are experimenting with, where we want to be able to experiment really freely. So we have some branches there that are a playground for us, and eventually many of them go upstream after a few weeks or a couple of months; it depends. So if you want to check this out, you should take a look at the two repositories. You will probably mainly use the upstream one, but you can also take a look at the other one; maybe your use case will take advantage of the set-top-box-specific things. And of course, collaboration is welcome. We really see this as a very open project, and we welcome companies, individuals, and people testing new hardware. If you want to see this live, we have a demo in the booth area: going out to the left, then to your right, at the end corner. We have a couple of demos there running on a Raspberry Pi 2. You can see YouTube TV fully supported. 
You can see how WPE does transformations on a full-HD video in a quite smooth way. I think it's interesting, as it complements what I presented today in the talk. And this is it. Thank you very much. If you have questions, I'm happy to answer. Yeah, I think I need to pass the mic, so I will be moving around. [Audience] I have a couple of questions. First, how many people did the initial port? Second, do you have a meta layer for Yocto? And what's the state of i.MX6 support, for example? [Juan] Okay, so three questions. The size of the team has varied a lot. In total we are 60 in the company, and about half of the company is directly or indirectly working on WPE in different layers. Probably 20 of them are working on the port itself; the other 10 are working on the core things, adding new standards, working on some media-related things. We also have people working on GStreamer, who could be counted too. So something like 30 people, but fully working on the port, maybe 15 to 20. The second question was about Yocto. Yes, we have recipes upstream for Yocto. We try to make things easy for people using Yocto and other similar projects, so most of them are supported. There are different recipes for different hardware, so it should be easy to test this. And the third one, about Freescale: it's actually one of the hardware platforms we support, specifically the i.MX6 Solo and, I think it's called, the DualLite. We have been using those two for a couple of customers; they are supported and they work pretty well. We have been testing the performance there: our customers are using this for quite sophisticated CSS animations, and things work pretty well. So yes, you can check it out. There were other people, I think. Okay, I think they were first, then I'll come here. [Audience] I just wanted to know what kind of licenses WPE is using. [Juan] Okay, licensing. We are using the same licenses that WebKit uses, which is, if I'm not wrong, LGPL version 2 and some parts BSD. 
So the port-specific parts, because we inherited this license from WebKitGTK, are LGPL version 2, and the core parts are BSD. So it's basically quite permissive licensing. And you were next, right? [Audience] How complex would it be to add Web USB, or device drivers for peripherals? [Juan] Okay, so if I understand correctly, the question is about device APIs and that kind of thing. We have been experimenting with that for automotive, for example. You basically need to connect the lower layers to JavaScript and offer an extended API. This is not very difficult to do. It's probably not something that belongs upstream, because it's a specific use case, but technically it's not very difficult: you connect the callbacks you want to call in C to the JavaScript layer, using mechanisms that are already there in JavaScriptCore. So our experience is that it's something that can be done quite quickly. [Follow-up from the audience.] I think you need to use the mic, because they are recording. What we were trying to do in the automotive case was accessing some sensors and some data from the car. You cannot use a W3C standard API for this, but there are automotive-specific APIs that you need to implement, so you need to expose this API to the JavaScript application. It's, for example, knowing the speed the car is moving at, or even interacting down to the car and saying, okay, please accelerate. That kind of specific API. [Audience] Accelerating from JavaScript, right? Interesting. For the backends, is that a build-time selection, or can it be selected at runtime? For example, you have the Wayland one and basically the bare-metal version; could you select at runtime which one you use? [Juan] I'm not sure about this. I think most of the work we have done so far is at build time. I'm not really sure about the limitations of doing it the way you describe. We can maybe take it after the talk and discuss it. Anybody else? Okay. 
Thank you very much.