Good afternoon. My name is Ahmed Radasekat, I come from my university in Paris, and I work on the GPAC team. For those who don't know GPAC, I will do a quick presentation, then I will talk about the key tools in GPAC and the latest news, then about the delivery of VR 360 video using tiles, and I will show you some demos at the end.

So what is GPAC? GPAC is an open-source multimedia framework. It is under the LGPL license, hosted on GitHub, and it can be used on desktop, on mobile (iOS and Android), and on embedded Linux.

So what do we have in GPAC? Basically, two sets of tools. One tool, which you may know, is MP4Box, which we are happy to call a worldwide reference for MP4 file manipulation. It does manipulation of MP4 files, but not only that: you can encrypt your files, concatenate files, and segment them. You can also add and remove streams, add and remove items, and things like that.

We also have a player, which is a bit more than a simple audio/video player; it is halfway between a player and a browser. We have support for 2D and 3D graphics, you can use any protocol you want, including HTTP, and you can use almost any codec: we use FFmpeg, and we have support for OpenHEVC and its scalable extension. And I forgot to say that GPAC is a research-oriented project.

Here is some of the latest news. We now have a public test infrastructure. We have support for new extensions of AVC and HEVC. We have improved the TTML support. We now have support for hardware decoding on OS X, iOS, and Android. We are working in a branch to better support the PIFF and Smooth Streaming file format, and on hardware-accelerated encryption. We also have a lot of interesting projects, which you can check on our GitHub.

I will talk to you now about the delivery of VR 360 content using tiles. As you may know, playing 360 video requires at least 4K, but some claim 20K is needed to achieve 4K resolution in the field of view, like we see in the illustration. To achieve this, we need to reduce the bandwidth. We can do this by acting on the compression: compressing the 2D video after some projection, or after some shuffling and packing. Or we can act on the delivery part: we can deliver a part of the video based on the viewport, and put a low quality outside the viewport. But then we need to react quickly to motion.

Here we have some examples of projections. We have the simple one, an equirectangular projection, and an equirectangular projection with specific packing: we just take the top and the bottom of the video, where we don't need a lot of precision, and put them on the top. We can also do a cube map projection with packing. So we think videos will probably be packed and compressed based on rectangular regions, and hence the interest of tiling.

The principle of tiling is quite simple: we just cut the video into different parts, and we have the possibility to have a different quality in each part. We also have the possibility to sometimes play just one tile.

In MPEG-DASH we have a notion called SRD, the Spatial Relationship Description, which describes the spatial relationship between videos in the source content. It is codec agnostic: it makes no assumption about tiling coding tools, and it can be used with multiple independent videos. It is already used for projected videos, but there is some discussion about extending it to 3D relationships.
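To make SRD concrete, here is a sketch of what such an annotation can look like in an MPD. The numbers are illustrative only, not from the talk: they describe the top-middle tile of a 3x3 grid over a 3840x2160 source, the value string reading source_id, x, y, width, height, total_width, total_height.

```xml
<!-- Illustrative sketch: one AdaptationSet per tile region (attributes abbreviated) -->
<AdaptationSet mimeType="video/mp4">
  <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014"
                        value="1,1280,0,1280,720,3840,2160"/>
  <Representation id="tile2_hi" bandwidth="2000000" width="1280" height="720"/>
  <Representation id="tile2_lo" bandwidth="400000"  width="1280" height="720"/>
</AdaptationSet>
```

A player that understands SRD can place each tile at its (x, y) position in the full frame and pick a different Representation per tile; a player that does not can simply ignore the SupplementalProperty.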
In our player, MP4Client, we have support for two different adaptations, which we can see here using our GUI. We can see here the different tiles. In the first adaptation, for this example, we actually have nine separate videos, which means nine decoders, as we can see here with the different buffers. And we have another adaptation, which is called HEVC motion-constrained tile-based adaptation. In this adaptation we have just one video, so one decoder, and here we can see the different tiles, nine tiles, with different SRDs and different qualities.

For this second adaptation, after tiling we encode the video using a motion-constrained HEVC encoder. After encapsulation we have the tile tracks and the tile base track. After that, you can generate an MPD using an MPD generator, to get the MPD file and the MP4 tile segments. Then you can play it, for example with our player, using HTTP requests. You can choose any tiles that you want to play, as I said before, or just one tile. Using this link you can find the whole description of this adaptation: how to encode, how to create the files, how to play them, et cetera. A sketch of this pipeline is shown after the demos below.

As we don't yet have a standard for choosing which tiles to prioritize, we have some streaming strategies with which you can configure our player, MP4Client. We have a lot of strategies; here we have four: uniform priority, row-based priority, center-based priority, and priority based on the current viewport. As I said, we don't have a standard, but this can be used with a gaze tracker or a head tracker, for example.

After this, I will show you a demonstration where we do an adaptation as a function of the viewport: we will have the highest quality on the visible tiles. And after that, I will show you a demonstration of our implementation of hardware decoding using MediaCodec on Android.

So here in MP4Client we have this beautiful tool: we can check which decoder is used, we have the possibility to change the bandwidth, and we can see the different qualities. And now here we have the adaptation. As you can see, the quality changes as a function of the viewport; it's not very visible here, but it changes. In this next demonstration, we render only the tiles that should be visible, and we can again see the adaptation of the quality as a function of the viewport. Here it is very, very visible, since we don't render the other tiles.

And here we have the demonstration of MediaCodec: the difference between MediaCodec hardware decoding and the software decoder, for which we use FFmpeg. We can see here that with software decoding the FPS does not exceed 16 frames per second, we have a lot of dropped frames, and one frame can take more than 100 milliseconds to decode. On the other hand, with MediaCodec one frame is decoded in 20 milliseconds, with a constant FPS of 25 and no drops. We can also see the stuttering in this video, while the other one is quite smooth. I forgot to say that it is a 4K video.
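To make the pipeline of the second adaptation concrete, here is a rough sketch of the kind of commands involved. The options follow GPAC's tile-based adaptation guide (the link mentioned above); Kvazaar is just one example of an HEVC encoder that can constrain motion vectors to tile boundaries, and the exact flags and file names here are assumptions, not taken from the talk.

```sh
# 1) Encode with a 3x3 tile grid, motion vectors constrained per tile
#    (Kvazaar shown as an example motion-constrained HEVC encoder)
kvazaar -i src.yuv --input-res 3840x2160 \
        --tiles 3x3 --slices tiles --mv-constraint frametilemargin \
        -o tiled.hvc

# 2) Encapsulate: each tile gets its own track, plus a tile base track
MP4Box -add tiled.hvc:split_tiles -new tiled.mp4

# 3) Generate the MPD and the MP4 tile segments (1s segments, RAP-aligned)
MP4Box -dash 1000 -rap -frag-rap -profile live -out dash/tiled.mpd tiled.mp4

# 4) Play the result, e.g. with GPAC's player
MP4Client dash/tiled.mpd
```

Each tile track then appears as its own AdaptationSet with an SRD descriptor in the MPD, which is what lets the player fetch a different quality per tile.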
Thank you. And I will be happy to answer if you have any questions.

I have a question about DASH. Do you think it's going to become part of the DASH standard anytime soon, this spatial description piece for these tiles? I didn't actually understand. So it's not part of the DASH standard yet? Which part, the tiles part? Yeah, the tiles part; do you see it becoming part of the DASH standard somehow? I think it's already in the standard, and tiling is already in HEVC, I think.

So with the Facebook pyramid format... Can you repeat? Your question is whether we checked the statistics using the cube map projection? No, that's not it. You said that you were using motion-constrained HEVC and cube map; did you measure any efficiency improvement? Why constrain the motion vectors, and did you see any improvement? So the question is whether we did some statistics to see what we lose by encoding videos with motion-constrained tiles. Yes, we did this, and there is not a lot of difference; you can use up to 25 tiles and it's still quite good.

How much bandwidth can you save while getting a similar 4K rendering? You mean how much we gain if we render just one tile? I don't know exactly, but surely we gain some bandwidth. It's not really about a benefit in compression efficiency; it's more about optimizing the viewport according to the bandwidth you have, right? Yes. Because you're not always seeing the full 4K picture; you're just seeing a part of the picture. So instead of encoding the full picture, you just encode a part, and then you use the bandwidth you save on the other tiles to improve the quality of the view you are currently seeing. So the tiling is not necessarily there to improve the overall quality of the 4K picture; it is there to optimize the quality of what you are actually seeing while watching the 360 video. So you're seeing one tile at a time, not all the tiles at the same time. But in the encoding, we encode all the tiles at all the qualities; it's just when we play the different tiles that we do the adaptation.

The question is whether MP4Client supports all devices. Actually, on Android we have multiview support, and we can use it with the Google Cardboard. We support HEVC multiview, so you can play a multiview file side by side, for example, or play it on a 3D screen. We tested it with a file with 5 views, and we can say we support it.

The question was what the difference is between the first adaptation and the second adaptation: adapting a video played as a lot of separate tile videos, versus using just one video. Yes, that's it.

The question is which adaptation algorithm we use with these tiles. Here we just check which tiles are visible, and we have the configuration, as I said, to choose which quality to use. But we will soon support the BOLA algorithm for adapting videos.

As I said before, yes; the question is whether we support multiview videos, multiview files. Yes, we support it, we can play it. Two streams in one file? Yes, we can have two streams in one file, and you can play it, as I said, side by side, or choose which view you want to use. Thank you.