Hi everyone. Thank you for coming once more to FOSDEM for a new Open Media room. Every year we start with the most anticipated talk, and this year we talk about VLC 4.0. Thank you. Hey, do you guys hear me? Okay, thanks for coming. Some of you have already seen part of it. This talk is a continuation from last year, where I came here and said: this is what we're going to do. This is the follow-up talk: this is what we've done. I'm sorry, this slide is a bit tricky.

So, just a few stats about VLC 3.0: 18,000 commits on the core and around 3,000 on each of the ports. It took a while to arrive, but it was very good, because we refactored the infrastructure and the architecture so that mobile and desktop share the same core. And finally, we activated hardware decoding absolutely everywhere. Quite a strong release, quite difficult to stabilize because of hardware decoding and drivers, which are shitty. And for once it's not only on Linux: they are shitty everywhere, especially on Windows 10. The highlights of VLC 3 were hardware decoding of 360° video, Chromecast, 10 bits, the early parts of HDR, 12 bits, HTTP/2, and correct audio passthrough, not the usual hack we'd been carrying in VLC for the last 10 years, which we finally fixed. A bit of Wayland and a new subtitle stack. OK.

Download numbers. Hello? Yeah, sorry, the projector is a bit... Yes. We've already done more than 200 million downloads on our website. That is, as usual, outside of Linux distributions, outside of Android and iOS, outside of download.com and other websites that "improve" the experience by adding toolbars. And that's 89 million on our website for 3.0.4, which was the version that stayed six months on our website. Yes.

So the core of the talk is about VLC 4.0. This is exactly the slide I showed last year. If you want to troll me, you can go and check the website from last year; this is copied from it.
So the next release is called Otto Chriek, who is a vampire photographer from Discworld. And basically, I said: new playlist, media library, work on the interface, new video output architecture, VR/3D, and dumping old platforms. And we've done almost everything on that list. New playlist, clock, media library, interface, new video output: basically everything is done. So I'm going to present exactly what we've been doing.

The first thing: in VLC 3.0 we had 360° support, which covers all those videos that you can watch on a normal laptop, without installing some very complex software, and it works everywhere. You can touch the screen, you can use your keyboard and your mouse and move the view around. That covers, of course, equirectangular and cubemap projections. It's done in OpenGL everywhere except on Windows, where it's using Direct3D. At the same time, we also did everything related to 3D audio, which is ambisonics. So we wrote an ambisonic rendering and a binaural rendering stack for the audio part, and that works more or less in 3.0.

In 4.0, though, people have been asking us to go to the next step. And the next step is to support VR headsets. Because VR is almost dead now, or soon to be dead, it's the right time for us to come and take the market. A lot of the work has been done by the OpenHMD team, who have reverse engineered all the headsets, because otherwise you need to install a 250-megabyte SDK and 200 megabytes of Unity to do anything in VR, which is horrible and annoying and doesn't work on Linux and doesn't work on Mac. By talking directly over the USB wire to activate all the features of the headsets, you can skip all that. So you will have a normal VLC, you put the headset on, and it just works. You don't have to do anything, nor install anything, and that works. This is us at the IBC show, where we were demoing with people coming, putting the headset on and trying.
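To illustrate the equirectangular projection mentioned above, here is a minimal sketch, not VLC's actual renderer code, of how a viewing direction maps to texture coordinates in a 360° frame. The function name and normalisation conventions are my own:

```python
import math

def equirect_uv(yaw, pitch):
    """Map a viewing direction (radians) to (u, v) texture
    coordinates in an equirectangular 360-degree frame.
    yaw in [-pi, pi), pitch in [-pi/2, pi/2]."""
    u = (yaw + math.pi) / (2.0 * math.pi)   # longitude -> horizontal
    v = (math.pi / 2.0 - pitch) / math.pi   # latitude  -> vertical
    return u, v

# Looking straight ahead (yaw=0, pitch=0) hits the frame centre.
print(equirect_uv(0.0, 0.0))  # (0.5, 0.5)
```

In a real renderer this mapping runs in a fragment shader, but the math is the same.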
I think it was one Vive, one Oculus, one PSVR, to show that it worked everywhere. So this is going to be in VLC 4.0, sorry, and of course support will also come on Android. We might merge the 3D part, which is everything related to the NVIDIA 3D glasses; it's not completely sure, but once you're doing VR, doing just stereo is quite easy.

Input. So we've done, well, they've done, the work on the input manager. The problem in the past was that VLC had the playlist, and the playlist was doing absolutely everything, from managing the audio output and the video output to service discovery, browsing on UPnP and so on. It was a monster, and it was never finished, because the people who worked on it basically left the project in the middle of the rework, and for the last 10 years we've been fighting against that. So we decided to write a new input manager, which basically holds the inputs, and also holds the references to the audio output and the video output. This is quite important, because that's what is going to give us gapless, right? You can have actual gapless in audio, maybe in video, whatever that means, but at least you have state that you keep around while moving from one input to the next. Gapless audio is something people have been asking for since, I think, before I joined the project, so that could be 2003, and we are probably going to finally deliver it. Also libVLC, which is used by Android, iOS, and a lot of people who use VLC as a playback backend, did not have the same playlist; it had a different playlist, because it made so much sense to have different playlists and competing code. So all that is being merged, and we're really separating the player, which just knows the current media and the next one, from the playlist, which is an actual playlist.
And by an actual playlist, I mean something that is flat, you know, with one next item, and not the horrible tree in the VLC playlist, which was amazing because there was in fact only one tree, but at the same time also a flat representation, which is, of course, an amazing design that almost none of us ever understood correctly. And we are making that the same for the VLM, which is the multiple-input part of VLC, for libVLC, and for the main VLC. So that's going to simplify a lot. And it has a nice side effect, which is that now we understand that code base. It's sad, but it's true that in the VLC core there used to be some parts that we did not understand, or had difficulty with, well, maybe except Rémi, who knew about them, but nobody else did. So now we understand this code base more or less.

Clock, okay. So the current clock of VLC is based on the input PCR. For those who don't know what a PCR is, a very simple way to put it, sorry for the people who are very precise, it's not exact, is that it's basically the clock of the encoder: when the encoder encodes, it stamps its own time, and then you can synchronize on that time. That's because VLC used to be just the VideoLAN Client, just a player for TS streams, and there you care a lot about that, because you want to stay in sync with your encoder since you're doing live. If you're getting too slow, then you're going to accelerate, and that's why you always see VLC resampling the audio or delaying some video frames. And people are annoyed, because when you're playing a normal audio file, you don't care, right? You're not playing the television stream we used to play at the École Centrale Paris, where the project was started to play TS streams. And because the core was very focused on TS, we basically modified all our demuxers to fit this model, and it was not always the right idea. So the plan for 4.0 is to change that. So we went and destroyed the clock.
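The player/playlist split described above can be sketched roughly like this. All names are hypothetical; the point is the shape of the design: a flat playlist with a single notion of "next", and a player that keeps its outputs alive across media changes, which is what makes gapless possible:

```python
class Player:
    """Holds the current media plus the audio/video outputs,
    so the outputs survive a media change (enabling gapless)."""
    def __init__(self):
        self.aout = object()   # stands in for a real audio output
        self.current = None

    def play(self, media):
        # The aout is reused, not torn down, across media changes.
        self.current = media

class Playlist:
    """A flat list of media with a single notion of 'next'."""
    def __init__(self, player, items):
        self.player = player
        self.items = list(items)
        self.index = -1

    def next(self):
        self.index += 1
        if self.index >= len(self.items):
            return False
        self.player.play(self.items[self.index])
        return True

player = Player()
pl = Playlist(player, ["a.flac", "b.flac"])
aout_before = player.aout
pl.next(); pl.next()
assert player.aout is aout_before   # same output across tracks
assert player.current == "b.flac"
```

The playlist only ever calls into the player; the player never reaches back into the playlist, which is the inversion relative to the old monster.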
And I think there was a joke that no one dared touch the clock except Meuuh and Sam. But this year we finally did it. So the idea of the new clock is to have a main clock in the core that is basically doing the timings, and then, depending on the pluggable clock system, you adjust against the monotonic clock given by the system in order to accelerate or not. For example, you're going to be able to say: my master clock is going to be my audio, because you're playing a local video file, and then your audio is the most annoying part, because it has S/PDIF, and because you have a shitty driver that doesn't have the same clock as your system. So for this case you say: this is now my master clock. But for everything related to V-Sync or G-Sync or FreeSync or whatever the latest AMD and NVIDIA marketing terms are, you might want your master to be your video output. Or if you're a professional using SDI, maybe your SDI clock is better than all the other clocks and more important to get right. So basically you choose which of the clocks is the master clock; it feeds its difference to the main clock, and this main clock drives the others. All the other outputs are slaves, and ask the core how to translate a PTS into real time.

This is going to help us because it gives us, of course, no more resampling all the time, better synchronization, V-Sync and so on, but especially frame accuracy. Because the old clock was around the input, right? And then, in order to know what we are playing, you have the whole pipeline: the codecs, which of course are all instantaneous, and the video filters and the video output. And of course all of this happens in no time, right? Well, no, exactly. So the problem is that the clock was over there, around the input.
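A heavily simplified sketch of the master/slave clock idea: assume the elected master output feeds (PTS, system time) observations to a main clock, and slave outputs ask it to convert a PTS into a render time. The real clock also has to smooth drift rather than take the last observation verbatim; the names here are made up:

```python
class MainClock:
    """Core clock: the master output feeds it (pts, system time)
    observations; slave outputs ask it to convert a PTS into the
    monotonic system time at which to render."""
    def __init__(self):
        self.offset = None  # system_time - pts, set by the master

    def master_update(self, pts, system_now):
        self.offset = system_now - pts

    def pts_to_system(self, pts):
        assert self.offset is not None, "master has not ticked yet"
        return pts + self.offset

clock = MainClock()
# The audio output (elected master) observes that PTS 1_000_000 us
# was actually heard at system time 5_000_000 us.
clock.master_update(1_000_000, 5_000_000)
# The video output (a slave) asks when to display PTS 1_040_000 us.
assert clock.pts_to_system(1_040_000) == 5_040_000
```

Swapping which output calls `master_update` is what "choosing the master clock" means; the slaves' code does not change.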
Then we had to estimate the delay of the PTS through all of that and add it, which is a problem, because it means VLC was never completely frame accurate. It was usually within one to two frames, but not completely frame accurate. Now that we move to asking the timestamp from the output, we can be frame accurate and get SMPTE timecodes and so on. I think a lot of those professional players from Panasonic and Sony, which are basically 2,000 euros each, are going to be very unhappy about that, because I think we're going to kill their biggest selling point.

Okay, video output. Currently in VLC, we take the frames from the decoder after asking the video output: what do you support? And it's going to say: well, I only support I420 and NV12. Then we build a picture pool there and go back to the decoders and to all the filters. That made a lot of sense in the '90s and the 2000s, when your graphics card was a piece of shit and you were using XVideo, which only supported one format. Now almost all of them support Direct3D, OpenGL, shaders and so on. So it makes less sense to have the output block the whole pipeline. That's why we are moving from a pull model to a push model, where the decoder pushes the frame to the next stage, which will be a filter, which pushes it to the next filter, and so on until the output. It makes things way easier to manage, it matches exactly how we manage audio, and it's also going to help with recycling the vout when you move from one input to the next, because you might be able to keep the same surfaces and just add a filter. And of course we're working on more HDR. And when I say we're working, I mean the libplacebo guys are working. Niklas, I don't know where you are. Thanks. Okay, Media Library.
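The pull-to-push change can be illustrated with a toy pipeline, purely illustrative and not VLC code: the decoder side drives each frame downstream through the filters to the output, instead of the output pulling frames up through the chain:

```python
class Sink:
    """Terminal output: just records the frames it receives."""
    def __init__(self):
        self.shown = []
    def push(self, frame):
        self.shown.append(frame)

class Filter:
    """Each filter processes a frame and pushes it downstream,
    rather than the output pulling frames up the chain."""
    def __init__(self, fn, downstream):
        self.fn = fn
        self.downstream = downstream
    def push(self, frame):
        self.downstream.push(self.fn(frame))

out = Sink()
chain = Filter(str.upper, Filter(lambda f: f + "!", out))
for frame in ("a", "b"):   # the decoder side drives the pipeline
    chain.push(frame)
assert out.shown == ["A!", "B!"]
```

Because every stage exposes the same `push`, inserting a filter, or keeping the sink alive while swapping what feeds it, is trivial, which is the recycling-the-vout point above.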
So, the Media Library. We have a new C++, I've been saying "new" for the last two years now, but we have a new Media Library in C++ that is used today in the Android port of VLC. This Media Library basically indexes what you have on your phone, and now we're bringing it to iOS and the desktop. This is the tablet version on Android. Small technical details: it's in C++, backed by SQLite. It's quite simple and light compared to other media libraries; it will not do the very, very complex stuff that you have in some media centers. It indexes audio and video, shows, playlists, and it also indexes your UPnP shares, or DLNA, or SMB, or maybe NFS for you guys, probably.

Okay, but having a Media Library brings a problem, which is that we need a new UI. Everyone has been complaining about the UI of VLC since forever. Usually the UX of VLC is okay, because, well, you don't interact with it: you double-click on your video file, you do two modifications, it just works. However, the UI: oh, it's too old, and so on. And also, GNOME 3, Plasma 5 and Windows 10 have changed a lot the way interfaces look, and people seem to not like VLC's anymore. So we're working on that.

There are two main use cases for VLC. The first one is to play from Explorer, Finder, Nautilus, or whatever you want. Dolphin, of course, for the KDE folks. And there you don't care, right? You just want a media player. Or if you're on the command line, you just run vlc and it plays. But there are lots of people who use VLC to play audio, or to play TV shows, and to make playlists and so on. They work the other way around: they launch VLC and then they open the media. That's a much more common use case than we thought; around 50% open VLC this way. And for them, we need a media library and we need a nicer UI. So what I'm going to show you is a screenshot. I don't think it's been shown anywhere yet. Just don't scream.
Okay, so I'm sorry guys and girls, it's very technical: I'm going to show you a screenshot of a UI. Do not worry, you can still use the command line, right? So most of the UI is done in QML, which is basically Qt 5. This is what we have currently. So there's the video player. You even have transparency at the bottom of the controller. Wow. Welcome to 2005. But basically that's what you expect, right? And it fades; there is not much chrome around. Same for the playlist, which can be hidden. And most people are just going to see that. You don't see a menu, so the GNOME people are going to be happy. I think we need to remove some features so they are very happy. But we're getting there.

This is the grid view, right? So when you open it, there are the videos that you've been indexing. A lot of green videos and very interesting mirrors at the bottom, but I think you get the idea. And at the top you have music, video, network, and maybe internet radios and so on. Audio: well, you can have albums, artists, genres, tracks, you know, the usual. It's very simple, right? But just what people want. And when you click on an album, one of the best albums, of course, it just opens, a bit like Lollypop, for the GNOME people. And you can basically click, and then you have the playlist on the side.

Okay. So that you don't scream: the media library is optional, okay? We are not going to force people to index their drive. You can disable it, you can select which folders. And if you double-click from your GNOME Nautilus, or, what's the name now? It's probably called Files. From your Files or Finder, it will not launch the media library, right? So it's just fast, and I think that 4.0 is going to be even faster at launching a movie than VLC 3.0. I showed it to you without menus. Do not worry: you can activate the menus if you prefer menus, right? We don't care, do what you want. If there is an option, we are not GNOME.
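A toy version of an SQLite-backed media index, in the spirit of the media library described earlier; the schema and the function are invented for illustration, not the actual VLC schema:

```python
import sqlite3

# In-memory stand-in for the on-disk database a media library keeps.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE media (
    id INTEGER PRIMARY KEY,
    path TEXT UNIQUE,
    kind TEXT,           -- 'audio' or 'video'
    artist TEXT,
    album TEXT)""")

def index_file(path, kind, artist=None, album=None):
    # UNIQUE path + OR IGNORE makes re-scanning a folder idempotent.
    db.execute(
        "INSERT OR IGNORE INTO media (path, kind, artist, album) "
        "VALUES (?, ?, ?, ?)", (path, kind, artist, album))

index_file("/music/a.flac", "audio", "Daft Punk", "Discovery")
index_file("/videos/demo.mkv", "video")

# The "albums" view of the UI is then just a query.
albums = [r[0] for r in db.execute(
    "SELECT DISTINCT album FROM media WHERE kind = 'audio'")]
assert albums == ["Discovery"]
```

The point of SQLite here is exactly this: the album/artist/genre views in the UI become plain queries over one small file.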
There will be some GNOME-specific and KDE-specific adaptations so that it looks like a GNOME application or a KDE application. That's very important. And yeah, if you care about CSD and SSD, which are client-side decorations and server-side decorations: we don't care, right? We will make it so that it looks native on your platform and on your own religion. We don't care. It's going to work on Linux on Wayland and X11, on Direct3D 9 for the people who are still on Windows 7, and Direct3D 11 for people who are on Windows 10. These are mock-ups for the Mac UI, which is a bit less advanced than the Linux UI. And this is the UI for the next version of iOS, which takes the same ideas and the same media library. Yes.

More features. In VLC 4.0, Wayland actually works; the screenshots you've seen were taken on Wayland. As in 3.0, we have Chromecast; people broke our balls for two years about that. We are adding UPnP rendering, so any smart TV, anything that understands UPnP as a client, can be sent to in the same way. It's going to be the same UI, because no one cares whether it's Chromecast or UPnP rendering. And, of course, we're going to have AirPlay, basically AirPlay 1, to do exactly the same. You don't care what protocol is running underneath. We added a lot of other support: DASH WebM, TTML subtitles, images, which is horrible, AV1, WebVTT encoders, especially for Derek, of course. We moved the SDI output from a video output to a stream output, so we are going to support way more stuff on SDI. We support SMBv2 and v3 thanks to a new library, and we support the RIST protocol in and out. On Android? Which? SMB. Yes. And iOS.

And I promised that we were going to drop platforms. This is exactly what we did. We are dropping XP and Vista. We are dropping macOS 10.7. Sorry, I said that we would drop from 10.7 to 10.9; we're actually going to drop 10.10 too. Sorry for those who care.
And, of course, on Linux we're going to deprecate everything related to XVideo, so you will need OpenGL. Android: we're going to drop 2.2, 2.3, 3.0, 4.1 and so on; we will require Android 4.2. And iOS 7, 8, 9, maybe 10. And I have three seconds left, so I got five minutes for questions. Questions? Any questions? Any clever questions?

At some point I tried to capture old VHS video from a capture card, and it was quite horrible and hard, as a non-professional, to understand from the UI exactly what to do. Is this easier now? Yeah, it's one of the things. One of the points of the new media library is that, well, on the network side it's going to find all your network shares, but it's also going to show you all the capture devices directly there, so you can just click, and it's going to be easier than the horrible Ctrl+C Open Capture dialog, which is very difficult to use. So yes.

You need two mics. Sorry, I got two: there is one for recording and one for the room. I have a 360 camera. Yes. Which one? The Theta? No, it's a $100 one, the Chinese one. Okay, sure. Yeah, indeed. It makes files with two round videos, one for each lens. Can VLC transform that? No. We cannot, because we don't have such cameras. So please give us the camera and explain how it works, because it's basically two hemispheres with an overlap, and there is metadata in your file to tell us how to handle the overlap, but we've been asking people to give us such files so we can look at the metadata, and no one gives us the files. So please keep in contact and we will support that, because it's absolutely trivial to do. Thanks.

Other questions? I don't eat people who ask questions. Well, only in the morning. Would you like to get cameras donated? Yes. I would like cameras, I would like any old SDI equipment.
I need old equipment because it's always useful for us to be able to support it, so yes, please give us hardware. I also take old phones, old Android phones, old iOS phones, everything that is old, except your dad.

You mentioned HDR support in VLC, but is that on all platforms? Because as far as I know, Linux does not support this. Yes, so it's not. In VLC 3.0, but mostly in 4.0, we have two modes. The first mode is when your platform does not suck and gives us a way to actually pass the HDR metadata, so we talk to the video output and we send that. My understanding is that so far it works fine on Windows 10, for some definition of fine, because you're on Windows. It works fine on Android, especially on the Nvidia Shield. On Linux, I understood that on Wayland there is potential support with some Wayland extension, but I think it's so-so. As soon as there is a way to do it, and some people from Wayland tell us what to do, we will support it directly. In all the other cases, which is macOS, iOS, old Linux and old Windows, we tone map, using libplacebo to do the tone mapping. There are like 20 different algorithms, which are of course not documented in VLC, but we should pick the right one by default. On macOS, when we move to a Vulkan output, we will use MoltenVK to be able to activate HDR on macOS and iOS, but my understanding is that on macOS and iOS it only works with the internal screen, not external screens. And on X11, I think it's dead, so not going to happen, but if there are some people from Wayland, they are very welcome and we can really discuss with them. There was a question here.
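As an example of what a tone-mapping step does, here is the classic extended Reinhard operator, one of the many curves a library like libplacebo implements; this is the textbook formula, not necessarily VLC's default choice:

```python
def reinhard(luma, peak):
    """Extended Reinhard tone mapping: compress HDR luminance
    (normalised so SDR white = 1.0) into [0, 1], where `peak` is
    the source's peak brightness in the same units (e.g. 10.0
    for a 1000-nit master against a 100-nit SDR white)."""
    return luma * (1.0 + luma / (peak * peak)) / (1.0 + luma)

assert reinhard(0.0, 10.0) == 0.0          # black stays black
assert abs(reinhard(10.0, 10.0) - 1.0) < 1e-12  # peak maps to white
```

The curve is nearly linear for dark values and rolls off highlights smoothly, which is why a simple formula like this is enough to make HDR content watchable on an SDR screen.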