So we're here at SID Display Week 2019. Who are you? My name is Roger Barker. I'm a product marketing director for Arm Ltd.

And right here at the show you're launching the Mali D77? That's right, the Mali D77. It's our new display processor, aimed specifically at the head-mounted device market. It's got some extra special secret sauce in there to deliver asynchronous timewarp, chromatic aberration correction and lens distortion correction, all in hardware, in real time, to deliver a smoother experience for users at far lower power.

So these are some of the things it works to fix: asynchronous timewarp, lens distortion... We're able to compose multiple layers, up to four VR layers, at one time. I have to move out of the way. Yeah, and all of this has never been done before in this kind of way? That's correct. In the past this has all been done on the GPU, and the GPU needs a lot of horsepower to carry it out at high resolution and high frame rate. A lot of horsepower means high cost: it uses a lot of energy, and it needs a very expensive GPU to handle it at a high performance level.

So those algorithms have existed before, but they were just done on the GPU; there's been no dedicated IP to accelerate this? That's correct. People all have their own variants of the algorithms, but we've developed an algorithm that can run in hardware, and we can do it in a single pass through the display processor, which enables a far better experience.

So does that mean it's going to be part of the Mali IP, or part of the GPU? It's part of the Mali display processor. The Mali display processor would be used alongside something like the Mali GPU, alongside video, alongside the CPU. We have a whole multimedia suite, so we can provide all the components, but people who don't use our other products can still use our display processor.

So Arm is really good at getting lots of different IPs in there, right? In the SoC there are all these different things going on, and Arm has been doing the display part for a while too. We've been doing this for about six years now. We bought a company called Evertronics, based in Poland, in 2013, and we've gone through about four iterations of products. We're now on to a new architecture, which we introduced two years ago, called Komeda, and the D77 is built on top of the Komeda architecture.

Is it also related to the Apical acquisition? It works with the Apical IP called Assertive Display. At this stage we still have separate IP for Assertive Display and for the display processor, but they're optimised to work well together. When we designed the Komeda architecture we had already acquired Apical, and we made sure that all the interfaces, both hardware and software, were as smooth as possible to give a better experience.

So AR and VR have been buzzwords for a few years now, but they're ramping up to the next level. For them to be really, really successful, you need huge resolutions and huge functionality, and this is really important for that.
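To picture what the earlier point about composing up to four VR layers and correcting them "in a single pass through the display processor" could mean, here is a rough software sketch of the idea. It is only an illustration under simple assumptions (straight front-to-back alpha blending, one precomputed warp mesh, nearest-neighbour sampling); the function names are invented for the example and this is not the Mali D77's actual pipeline.

```python
# Illustrative only: a software model of "compose several layers, then warp
# once on the way to the display". Not Arm's implementation.
import numpy as np

def compose_layers(layers, alphas):
    """Blend RGB layers (each H x W x 3, values in [0, 1]) front to back,
    each with a scalar opacity from 'alphas'."""
    out = np.zeros_like(layers[0])
    coverage = np.ones(layers[0].shape[:2] + (1,))  # light not yet covered by nearer layers
    for img, alpha in zip(layers, alphas):
        out += coverage * alpha * img
        coverage *= (1.0 - alpha)
    return out

def single_pass_to_display(layers, alphas, map_x, map_y):
    """Compose the layers, then resample the result through a single warp mesh.
    map_x/map_y give, for every output pixel, where to read the composed image;
    in a real device that one remap would fold in the reprojection and lens
    corrections discussed in the interview. Nearest-neighbour for brevity."""
    composed = compose_layers(layers, alphas)
    h, w = composed.shape[:2]
    xi = np.clip(np.rint(map_x), 0, w - 1).astype(int)
    yi = np.clip(np.rint(map_y), 0, h - 1).astype(int)
    return composed[yi, xi]
```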
So what we're seeing is that the initial experiences people have had with VR haven't been as good as they'd like, and that's partly because the processors weren't able to process enough bits at a high enough frame rate to enable a smooth experience. We identified specific areas where we thought the display processor could offload the GPU, giving the GPU more of its time and more horsepower to improve what it delivers; the rendered output then goes to the display processor, which manages it all the way through to the display.

So when you talk about a 15% overhead on GPU performance, is that the work you're doing in this IP? Yes. As well as the 15% performance uplift, or performance freedom if you like, we're freeing the GPU to focus on what it should be doing. The other key thing is that, because we've freed it up, it isn't constantly being interrupted and having to switch context between rendering what the game needs and supporting the head-mounted display. That's the key part of this: we're stopping it being interrupted, so you can guarantee, or at least improve the guarantee of, the flow through the GPU and then through the display, and we can guarantee that we hit the frame rate of the target display.

And there's some other stuff you covered in your presentation. What's the timewarp about? Timewarp is a method of reprojection to mitigate display pipeline latency. It addresses the problem where the GPU hasn't been able to render the next frame fast enough for when it's due to be presented to the display. Basically, timewarp takes account of head movements and reprojects the same frame based on those movements. In real time? In real time. It prevents the jerky effect of missing a frame from the GPU: when you move your head, instead of being jerky, the image moves in the direction you moved your head, much more smoothly; there won't be the jerkiness, and it will be much closer to the actual head movement you're making. There are links to the sensors, either in the phone that sits in the head-mounted display or on the head-mounted device itself, where the gyroscopes and other sensors let you work out the head pose, and that's fed into the display processor. The display processor takes care of it; the GPU doesn't have to get involved and just carries on delivering the game at the best performance it can. Nice.
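As a concrete illustration of the reprojection idea just described, here is a minimal, rotation-only timewarp sketch. The camera intrinsics `K`, the two head-rotation matrices, and the nearest-neighbour resampling are assumptions made for the example; this is a common textbook formulation, not Arm's hardware algorithm.

```python
# A minimal rotation-only reprojection ("timewarp") sketch. Illustrative only.
import numpy as np

def timewarp(frame, K, R_render, R_display):
    """Re-project 'frame' (H x W x 3), rendered with head rotation R_render,
    to the newer head rotation R_display (both 3x3 world-to-camera matrices).
    K is the 3x3 camera intrinsics matrix."""
    h, w = frame.shape[:2]
    # Homography mapping display-pose pixels back to render-pose pixels.
    H = K @ R_render @ R_display.T @ np.linalg.inv(K)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    src = H @ pix
    src = src[:2] / src[2]                      # perspective divide
    xi = np.clip(np.rint(src[0]), 0, w - 1).astype(int).reshape(h, w)
    yi = np.clip(np.rint(src[1]), 0, h - 1).astype(int).reshape(h, w)
    return frame[yi, xi]                        # nearest-neighbour resample
```

The key property, matching the description in the interview, is that only the latest head pose and the already-rendered frame are needed, so the correction can happen without going back to the GPU.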
And you're talking about pre-distortion here? Typically, when you're using a head-mounted device, there are some bottle-top lenses between you and the display to give you an improved, or apparently improved, field of view, and the next couple of slides cover this a little. The top-left frame is the frame the GPU is presenting: it's nice and crisp, exactly what you want it to be. Fed through a lens, it picks up an effect called pincushion distortion; you can see it's squeezed in from all the sides, which has a negative impact. To compensate, we apply the inverse distortion: bottom left, we create a barrel distortion which offsets the pincushion distortion, so when it goes through the lens you get a good rectilinear image, which is what you want and what the GPU originally produced.

So do you have to calibrate this for every different headset? Yes. For every different lens? Yes, you will have to. So maybe the user could do that, or the manufacturer? You'd expect the manufacturer to do most of it, but most devices now have levers and adjustments so you can optimise it for yourself; you can change the interpupillary distance, things like that. So part of what we have to do is ensure that, within the firmware we provide, this can be adjusted by the user, and the OEM will have to do the same in their software: they'll create software that lets the user calibrate it for their own use and comfort.

And you have some more things you're adjusting? On this slide, the top-left frame is again the one rendered by the GPU. The bottom-right frame is after it's gone through the distortion correction, but as you can see it's not clear, it's still a bit fuzzy, and the reason is chromatic aberration. As the different colour channels pass through the lens they see different refractive indices, so they refract differently. We take the same approach we took with the distortion and apply the inverse before it passes through the lens, and that gets us to a nice sharp image, comparable to the one originally produced by the GPU.

Because every colour channel is a different wavelength? Yes, they're different wavelengths. So that's what creates chromatic aberration in every lens? That's right, that's right. Every lens will have it, and you always have to offset it. Today it tends to be fixed in software; we're doing it in hardware, so we're relieving the GPU of that work, making sure it can focus on what it does best, and we can reduce the power requirements and increase the performance of the end device.
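To make the lens correction above more tangible, here is a toy per-channel radial pre-distortion: the barrel shape counters the pincushion distortion, and giving each colour channel slightly different coefficients counters the chromatic aberration. The polynomial model, the centred normalisation and the coefficient layout are illustrative assumptions, not the D77's actual correction; real coefficients would come from calibrating the specific lens, as discussed.

```python
# Toy per-channel radial pre-distortion (barrel + chromatic aberration offset).
# Illustrative only; coefficients must come from lens calibration.
import numpy as np

def predistort(frame, k1, k2):
    """Apply r' = r * (1 + k1*r^2 + k2*r^4) per channel.
    'frame' is H x W x 3 in [0, 1]; k1 and k2 are 3-element sequences,
    one coefficient pair per colour channel (red, green, blue)."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalised coordinates centred on the (assumed) lens axis.
    u = (xs - w / 2) / (w / 2)
    v = (ys - h / 2) / (h / 2)
    r2 = u * u + v * v
    out = np.zeros_like(frame)
    for c in range(3):  # each channel gets its own distortion strength
        scale = 1.0 + k1[c] * r2 + k2[c] * r2 * r2
        sx = np.clip(np.rint(u * scale * (w / 2) + w / 2), 0, w - 1).astype(int)
        sy = np.clip(np.rint(v * scale * (h / 2) + h / 2), 0, h - 1).astype(int)
        out[..., c] = frame[sy, sx, c]
    return out
```

The sign and magnitude of the coefficients depend on the lens; per-eye adjustments such as interpupillary distance would shift the assumed optical centre.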
Nice. So you're here at SID Display Week to talk about all this because this is the mecca of the display industry; all the AR and VR display makers are right here, and they all have to think about your solution when they create their next-generation devices. Exactly. We want to make them aware of what we're doing, and we also want to be very aware of what they're doing and of their future developments, so that as we move forward we can cope with whatever they bring out. That's the key activity for now.

I'm guessing you maybe have access to some of these totally amazing future AR and VR displays. The dream is, like a year or two ago Google talked about wanting something like 18K displays, with foveated rendering and eye tracking, all sorts of things like that. And it's all more or less related, right? They're related. VR and AR are different from the display perspective, and the requirements are different, but they are related, and the things we're talking about here will be important in AR as well. Clearly we're talking to many different people to understand what will be required in the future, and we're helping to influence that, so we can make sure we're able to address it when the time comes. We're still a little way away from this in AR in particular, but it's something Arm will be working on over the next few years.

What are the other challenges for AR that aren't covered here? If you think about it, it's related to light: you're looking at transparent lenses and you really want to see on those lenses as well. Are you just looking for information? Are you looking for full video on there? Do you want them to be adaptive? Can you use a headset that doesn't look like a baseball helmet? Baseball helmet? American football helmet?

So how about... is it possible to have this dream where you have somebody sitting over there in AR, exactly on the chair, and that's part of this too? That's where we want to get to. We're a fair way from that being a common reality, but that's where we want to get to.

Alright, so thanks a lot for working on all these technologies that the whole world could potentially embrace. That's the aim. We just want to make it easy for people to use.