So, we're here today to introduce our new display processor, Mali-D77. This is the global launch of our new display technology. My name is Ian Hutchinson. I'm Director of Outbound Marketing for the Client Line of Business. The Client Line of Business looks after things like smartphones, DTVs, laptops, wearables, et cetera. I'm with my colleague Roger Barker, who's going to talk in more detail about the product when we get to that. It's the first time we've interacted with most of you on a face-to-face basis, so I'm really pleased to see you; some of you are familiar faces, but the majority of you are not. So, I'm just going to spend five minutes giving you a bit of background about ARM, and then I'll hand over to Roger.

I'm hoping most of you have heard about ARM, even if not directly about our display products, then about the company itself. ARM is really ubiquitous in the world of technology. In the areas of smartphones, DTVs and embedded, we've been around a long time, with some quite amazing numbers; as Shree said, 130 billion chips over the past 30 years. But you may not have heard about ARM, because sometimes we are the secret sauce behind many of the key technologies. And as we move into new areas, such as automotive and infrastructure, we think the name is going to become better and better known.

So, how did we get here? A little bit of history. ARM was formed about 30 years ago in a barn in Cambridge in the UK. Our first products appeared in some of the cutting-edge technologies very, very early on, and you can see some examples of that on the left-hand side. But it was really from about 1993 onwards that things started to take off. Our CPUs, which are incredibly energy-efficient, were designed into some of the first mobile phones and then into smartphones. And I don't think it's an exaggeration to say that ARM technology drove the mobile revolution from '93 onwards. And now, here we are today: our technology appears not just in those phones, but in multiple devices, almost everything you can think of. From cars to your wearables, to your laptop, to the projector, to the air conditioning, it's almost certain there'll be some ARM technology in there.

But what is it we do? Quite often we get asked: what do you do, what do you make? ARM is an IP company. We design IP. In our instance, in the Client Line of Business, we have Cortex-A processors, we have machine learning processors, and we have the Mali multimedia suite of products. We design the IP, and our partners, such as Qualcomm, MediaTek and HiSilicon, take that IP and design and build their own SoCs, systems on chips, which their partners then build into devices of all shapes and sizes, such as smartphones, DTVs, wearables, goggles, et cetera. And it's these devices that allow the end user, so you and me, my kids, my friends, your friends, your relatives, to actually enjoy these interactive and immersive experiences. And this sort of pervasiveness has led to some quite huge numbers. Shree mentioned 130 billion. Well, last year alone, in 2018, our partners shipped about 23 billion ARM Cortex CPUs. That's three times the actual population of the Earth. In our case they appear in things like wearables, smartphones, modems, et cetera. And you can see some market share stats at the bottom of the slide: over 95% in some very large selling categories.
And we have over 20 billion cellular modems in the market powered by ARM. But it's not just about CPUs. The CPUs were the first piece of IP designed into those mobile phones 25 years ago. We also have other products, such as Mali, our multimedia suite. Mali is not only GPUs, but display processors as well. Now, the GPUs are very, very successful. Mali is the largest-shipping GPU in the world, and last year our partners shipped over a billion of them in various devices to consumers all over the world. They really help power the user experiences we see today, such as gaming, AR, VR and AI. It's really that IP coming together on the devices that allows that.

But as I said, Mali is not just about graphics. We also have display technologies. Now, our display technologies are relatively new compared to the other bits of IP, so there are no staggering numbers yet, but we'll get there. Where we are today is a series of products that serve the clamshell, premium and home markets. That's covered by Mali-D71, which is our current premium display processor. Sitting alongside Mali-D71 is AD5, which stands for Assertive Display 5. Assertive Display is a technology that allows the viewer to view any content under different light conditions, and it also allows the viewing of HDR content on an SDR screen. So those serve what we call the premium markets. And then the mass market is served by their little brother and sister, Mali-D51 and Assertive Display 3.

So, that's where we are today. I said we're here to launch our new product. In approximately 10 minutes' time, the worldwide embargo lifts for our brand new product, which is Mali-D77. Mali-D77 is a brand new display processor, really focusing on VR, on head-mounted displays, whether they be standalone or within a sort of holster setup. And to tell you more about this exciting new VR technology, I'm going to hand over to my colleague Roger Barker to give you some further details. Roger.

Thank you, Ian. I think we should all stand up again. No, no, thank you. I'm very tempted, though. Right, thank you, Ian. Before I move on to the product, I want to talk a little bit about head-mounted devices and how we see things going over the next few years. Clearly, with a head-mounted device, you need a display that's held very close to your eyes. As it's held close to your eyes, you need something to increase the field of view, so they tend to have nice big lenses in front of them. And it gets very, very complicated. As we move forwards, we're going to have to get higher and higher resolutions on these displays. Within a couple of years, we're going to be seeing 2.5K by 2.5K per eye, so 5K by 2.5K in total, at up to around 120 frames per second. As we get to that sort of level, the number of pixels and the amount of information that has to be processed through the processors and out to the displays becomes enormous. And it's a real issue at the moment, getting SoCs powerful enough to move all these pixels fast enough to give a good experience in the headset. This is part of the issue we're facing. Typically today, frame rates are around 90 frames per second, driven and controlled through the GPU. The GPUs are often only going at about 45 frames per second and the frames are then getting doubled up. And this causes problems when you're wearing things close to your head. As you move your head, delays creep in.
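To put those resolution and frame-rate figures in perspective, here is a rough back-of-envelope calculation. It is only a sketch: the 2560-pixel reading of "2.5K" and the 4 bytes per pixel are assumptions of mine, not figures from the talk.

```python
# Rough pixel throughput for the near-future VR headset quoted in the talk:
# 2.5K x 2.5K per eye at up to 120 frames per second.
# Assumptions (not from the talk): 2.5K = 2560 pixels, 4 bytes per pixel.

width_per_eye = 2560
height_per_eye = 2560
eyes = 2
fps = 120
bytes_per_pixel = 4

pixels_per_second = width_per_eye * height_per_eye * eyes * fps
bandwidth_gb = pixels_per_second * bytes_per_pixel / 1e9

print(f"{pixels_per_second / 1e6:.0f} Mpixels/s")                         # ~1573 Mpixels/s
print(f"~{bandwidth_gb:.1f} GB/s for a single read or write of each pixel")  # ~6.3 GB/s
```

Roughly 1.6 billion pixels per second, and over 6 GB/s of memory traffic for every pass over the frame, which is why the number of trips through DRAM matters so much.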
The electronics can't process the pixels fast enough to let what's on the display track your head smoothly as it moves. This is part of what makes people uncomfortable when they're wearing these devices for long periods of time, and it has led to things like motion sickness. We've got to find a way around that. As Ian was saying, we have a very strong position in GPU land; we have a lot of GPUs in a lot of handsets that are used for this type of product. And we started analysing, a couple of years ago: what are the bottlenecks? What are the problems? How can we address them? Do we just continue to throw more and more power at the GPU, or do we look at other, innovative ways to address this? And this is part of what we've done with Mali-D77.

So, what is it? It's a new display processor IP, specifically designed to address the special requirements of head-mounted displays. It's designed to work either with a standard smartphone, so a standard flat-panel display, which, as Ian said earlier, means pluggable products; typically at the moment those are only really used with premium mobile phones. But you can also use this in all-in-one head-mounted displays or head-mounted devices. So, if we're going to make this more successful, we've got to address the problems that are experienced in head-mounted devices. That's specifically the latency related to head movements; reprojection, so as the head moves, how do you make sure you're moving what's seen in front of the eyes quickly enough that you don't make people feel ill; and also, how do we handle the effects of those enormous bottle-top lenses that are typical in VR headsets?

So, as we said a minute ago, we are very strong in GPU land. Why don't we just throw more GPU processing at this? And when I talk about VR processing here, I'm talking specifically about the problems of reprojection and lens correction. I'm not talking about how we actually create the game or whatever; that's done nice and cleanly, and it presents nicely onto a flat-panel display today. But when we add the extra workload of having to overcome the latency problems, reproject, and do the lens corrections, we need to use many more GPU cycles. It's somewhere between 10% and 20% typically, so we've put an average there of about 15% extra GPU cycles. As we do this, the GPU has to keep switching context, from delivering the game to actually carrying out the reprojection algorithms and back. As it switches context, it also becomes less efficient, because you stop processing the game, and you get to a position where you start missing the deadlines for updating the display. And this is where you start to see problems in the scene; you start to see artifacts and glitches in the system as it moves forward. So, as you can see here, we have the GPU talking many times to main memory before it sends the frame off to a display processor like Mali-D71, which then composes it and delivers it to the screen, whatever screen you're using.

So instead of GPU-based VR processing, we're moving the requirement to perform things like reprojection and lens distortion correction from the GPU into the display processor. We have inline processing through the display, so we don't miss the deadlines. The display processor has got all sorts of additional high-powered IP in there as well. It will optimise and improve the scaling.
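As a rough illustration of why doing the warp inline saves so much DRAM traffic, the sketch below compares the two flows per frame. The pass structure and byte counts are my assumptions for illustration only; the talk itself quotes a saving of around 40%, and the real figure depends on the actual pass structure and on compression.

```python
# Illustrative DRAM traffic per frame: GPU-based VR processing versus
# inline processing in the display processor. The pass structure and byte
# counts are assumptions, not ARM figures.

width, height, eyes, bytes_per_pixel = 2048, 2048, 2, 4
frame_bytes = width * height * eyes * bytes_per_pixel

# GPU-based: the GPU writes the rendered frame, reads it back to apply
# reprojection and lens correction, writes the warped frame, and then the
# display processor reads it once more for scan-out.
gpu_based = frame_bytes * 4      # write + read + write + read

# Inline: the GPU writes the rendered frame once, and the display processor
# reads it once, warping and correcting on the fly as it scans out.
inline = frame_bytes * 2         # write + read

print(f"GPU-based: {gpu_based / 1e6:.0f} MB per frame")
print(f"Inline:    {inline / 1e6:.0f} MB per frame")
print(f"Saving:    {1 - inline / gpu_based:.0%}")   # 50% under these assumptions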
It will enable you to tune things specifically for the display that you're using. We can add things like Assertive Display to make sure that we're doing all sorts of corrections, so that we're optimising the performance on the screen. But at the same time, we're doing all of this in a single pass through the display processor. That alone will save around 40% of the DRAM bandwidth, which is enormous; it makes a huge impact. So, we know that today we can do around 2K by 2K per eye in Mali-D77 at 90 frames per second. Not 45 frames per second doubled up; we can handle it at 90 frames per second, just firing straight through.

So, let's talk a little bit about the terms we're going to be using and what the D77 provides. Asynchronous time warp is a well-known phrase now; I'll talk a little more about it in a minute. I assume many of you know what it is, so you can correct me later. Then there's lens distortion correction and chromatic aberration correction, and as well as that, we can compose up to four VR layers in a single flow through the display processor. I said it's 4K90 capable, but we're optimising it around 3K at 120 frames per second. As these requirements increase, we will continue development; we'll continue putting this out in different formats, in different ways, with different levels. So, we'll keep adapting the D77 and have products derived from it to address the increasing performance requirements.

The top layer, I mean, it's really just eye candy in a way. The top layer is some RGB images that we're distorting in different ways, across the X, Y and Z axes. The bottom set are clearly from a video stream. And one of the advantages with the video stream, and I have to cheat here because I always forget this bit, I'll admit that, is that we're looking at quad projection examples. Quad projection is the process of positioning a rectangle in 3D space. Each of the corners of the video represents a 3D coordinate for Mali-D77, and we're able to project these quads onto the screen, positioning the virtual 3D plane within the display.

So, let's talk about time warp. Time warp is a widely used method of reprojection to mitigate display pipeline latency, as it says up here. The bottom-left diagram there is an illustration of what we're doing. We have the image that's there, and we move our head slightly. If we haven't already got a new frame ready, because it hasn't come from the GPU fast enough, we just adjust how we're projecting the existing frame so that we take account of the head movement. The head movement is fed into the display processor as well, so the display processor knows how the head has moved and works out how it has to project the actual display. And we're now doing this in hardware. On pre-distortion, we're combining time warp with lens pre-distortion to improve the overall display pipeline efficiency, and I'll talk a little more about that in a second.

So, geometric distortion is caused by the bottle-top lenses. If we render a normal frame, so this has come straight from the GPU, it's a normal frame. It's a picture of, well, I won't embarrass him, but we joke that a member of the team is closely related to this gentleman. In fact, it could be him. From here, if you render the frame and then pass it through the VR lenses, what you start to see is a geometric distortion known as pincushion distortion.
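The talk doesn't give the maths Mali-D77 uses, but as a rough sketch of the rotation-only time warp idea: a small change in head orientation between render time and scan-out can be expressed as a homography on screen coordinates. The intrinsic matrix, focal length and rotation angle below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of rotation-only time warp reprojection.
# The frame was rendered for one head orientation; by scan-out the head has
# rotated by R_delta. Each output pixel is mapped back into the rendered frame
# through the homography K @ R_delta.T @ K^-1. K and the angle are
# illustrative values, not Mali-D77 parameters.

def rotation_y(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

width = height = 2048
focal = 1024.0                                   # assumed focal length in pixels
K = np.array([[focal, 0, width / 2],
              [0, focal, height / 2],
              [0, 0, 1.0]])

R_delta = rotation_y(np.radians(2.0))            # head turned 2 degrees since render
H = K @ R_delta.T @ np.linalg.inv(K)             # output pixel -> source pixel

def source_coord(x_out, y_out):
    """Where in the rendered frame to sample for output pixel (x_out, y_out)."""
    p = H @ np.array([x_out, y_out, 1.0])
    return p[0] / p[2], p[1] / p[2]

print(source_coord(1024, 1024))   # centre pixel shifts by roughly focal * tan(2 deg)
```

The point of doing this in the display pipeline is that the sampling position can be evaluated per pixel as the frame streams out, rather than re-rendering or re-writing the frame in memory.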
So, we need to correct for that, so that you see the image exactly as you want to see it on the VR device. What we do is, before we actually pass it to the lens, we perform the inverse distortion, applied to the rendered frame, so that when it then passes through the lens, we see a proper rectilinear image in the head-mounted device. This is how we compensate for the geometric distortion. But the sharp-eyed amongst you will soon see that the result is still a little blurry. It hasn't completely solved the problem; there are still other issues for us to address. So, what's going on? The colours aren't aligned, and this is simply because of the different refractive indices for the different colour channels. The colour channels are separated as they pass through the lens. So, how do we fix this? We do the same as we did before: we apply an inverse chromatic aberration correction before the image passes through the lens, and this has the same effect, so that instead of the blurry image on the left we end up with the nice pin-sharp image on the right-hand side. All of these adaptations are applied inline as the data flows through the display processor. We're not going backwards and forwards between memory and processor; we're going inline and delivering straight away, and this is how we can guarantee hitting the frame rates that are required in modern devices.

So, I just want to backtrack a little. Ian spoke about Mali-D71. Mali-D71 we originally announced roughly two years ago now. It was the first product within what we call the Komeda architecture. The reason I want to highlight it is that it was specifically developed to ensure that we could guarantee high frame rates at high resolutions. So, we were always working towards 4K90 and 4K120: 4K90 for cell phones, 4K120 for devices where you can guarantee availability of the memory bandwidth. We started embedding things like prefetching units so that the data is always there; you never have to wait for data. We've got six polyphase scaler engines in there, so we have very high quality scaling. If, for whatever reason, you need to render from the GPU at a relatively low resolution, we can always scale up in the display processor, again as part of the flow-through operation. We have our own Arm Frame Buffer Compression in here as well, so we compress the data as it flows through; we can handle it more easily, and we can apply things like rotation much more easily by using the compressed formats. We have a co-processor interface which links directly to our Assertive Display, so if we need to do some tone mapping, or if we want to add in the content-adaptive backlight control from Assertive Display, we can add that into the solution. It's separate IP, but we add it to our display solution so that we're delivering a top-quality output.

So, this is Mali-D71, and the reason I've highlighted it is that all we're really doing is adding the additional block: the lens distortion and chromatic aberration correction and the asynchronous time warp. It sounds very easy when I say it like this, but it's not. And clearly, if you think about what we've been talking about, are all these lenses the same? I don't think so. We have many different lenses, so we have to be able to program the D77 so it knows exactly what the interpupillary distance is.
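As a rough sketch of the pre-distortion and chromatic aberration correction idea: a radial polynomial stands in for the real lens model, and each colour channel gets slightly different coefficients because the lens refracts red, green and blue differently. The model and the coefficients are illustrative assumptions, not Mali-D77's actual lens parameters.

```python
# Minimal sketch of lens pre-distortion with per-channel chromatic aberration
# correction. A simple radial polynomial stands in for the real lens model;
# the coefficients are illustrative, not Mali-D77 parameters.

# Per-channel coefficients: red, green and blue are refracted slightly
# differently, so each channel gets its own radial scale factors.
CHANNEL_COEFFS = {
    "r": (0.22, 0.10),
    "g": (0.24, 0.11),
    "b": (0.26, 0.12),
}

def pre_distort(x, y, channel, centre=(0.5, 0.5)):
    """Map a normalised output coordinate to the source coordinate to sample,
    applying the inverse (barrel) distortion that cancels the lens's
    pincushion distortion for the given colour channel."""
    k1, k2 = CHANNEL_COEFFS[channel]
    dx, dy = x - centre[0], y - centre[1]
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2   # grows towards the edge of the lens
    return centre[0] + dx * scale, centre[1] + dy * scale

# The same output pixel samples slightly different source positions per
# channel, which is what realigns the colours after the lens.
for ch in ("r", "g", "b"):
    print(ch, pre_distort(0.9, 0.5, ch))
```

Conceptually this is the same kind of per-pixel, per-channel sampling offset that the display processor evaluates inline as the frame streams out.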
It knows all sorts of clever things about the head-mounted device, so that we can apply exactly the correct corrections for the device as we project into the screens. So, this has taken a couple of years to get done. We're the only people doing this inside the display processor today, and we are convinced that this is the right way forward to enable VR and AR in the future. Clearly, AR has different requirements, but this is also a building block that will be used in that type of arena as we move forwards. So, it's something we're focused on: taking the power out of the processing and putting it inline so that we can guarantee performance.

Sorry, is that a question? Yeah. So, I'm going to take a pass on that in a way, because foveated rendering, of course, is key to how this moves forward. It's an alternative approach that is mainly being run through GPUs at the moment. We are looking at it; from the GPU point of view, we can handle that in our own GPUs as well. So, it's another additional tool available to help us provide improved performance at the very highest level. I'm not an expert on it, so I won't try to talk any more about it, but it is something that ARM is working on very closely within the GPU side of the business. Eye tracking we're investigating at the moment. So, yes, it's something that has to be done. There are many different ways of doing it, but it's something we're very keen on and are working on at the moment. We're not in a position where we're going to say much more about that yet. But clearly, all of these are areas of great interest to us, and with our background in low power, it's critical that these things are done with as little power as possible. We don't all want to walk around in motorcycle helmets. I certainly don't.

So, this is a brief example. This is actually the D77 running inside a relatively small FPGA within our development system. What we have there on the top is a regular phone running Android ARCore, and we're using that because it works out the head pose, the different angles of the movement that we're putting through. That's then fed into the FPGA, and the FPGA is amending, as you can see in real time, the position and what's presented on the screen. So, everything you're seeing there on the screen is being done on the D77; there's no GPU interaction at all. And this exact piece of kit is sitting in booth 842 over the next couple of days. So, please come and look at it, ask questions about it, and see what's there. Next to it, we've also got a demonstration of Assertive Display 5, also running on the Mali-D77, as a separate thing.

Yes, please. Would the D77 be expected to sit in the VR headset? So, it depends on how you're handling it. The reason we've done it the way we have is that it can either sit in the phone or it can sit in the head-mounted headset. If you think of a pluggable headset where you're putting a phone in, then it's in the phone. If you have an all-in-one, whether it's linked to a games console or linked to anything else, then it would have to be in the headset.

So, the D77 is what we're announcing today. We think it's a game changer for display technology in head-mounted devices. We've added lens distortion correction, chromatic aberration correction, reprojection and time warp.
And all of that is in a single pass through the display processor, which offloads the GPU significantly and allows you either to use the GPU for doing more, or to reduce the GPU requirements so you can actually deliver this in a lower grade of product as well. So, we're looking at this from all angles. The VR processing in the display processor is of higher quality, so the output to the displays is of higher quality than you normally get through a GPU, and we can show that on the stand if you want us to. And we believe this is the best route to get to a position where you actually have a very light, small, much more comfortable, cheaper head-mounted device for people to develop for their customers.

So, finally, our lead architect, who is the person who started off this programme and did a lot of the initial work on it, is giving a talk on this product, on what we're doing and how we're going about doing it, in far more detail than this, starting in about 50 minutes in room 220B. So, if you want to learn a lot more, that's a good place to go. The paper he's basing the talk on is also available within the information that's provided. So, we are excited about this. We hope it's of great interest to you. We believe it's a very cool product. The demo is good, so please go and look at it. There are a couple of people who are very keen to talk you through it. There are at least two; there are probably many more people there doing so. A couple of them are sitting at the back in the corner grinning. So, we hope this was of interest, and thank you for your time. Any questions? Yes?

So, actual hardware we'd expect to see around either Christmas next year or CES just after that. Typically for IP, it takes roughly a year to get the IP into an SoC, maybe a little longer than that, and then up to another year to get from that into end products. So, roughly a two-year cycle. It's a different cycle to many. So, we're hanging around for a while, so by all means come up and ask us anything you want to ask. If we can answer, we will. If we can't, we'll duck. Oh, yes, sorry, that's the right answer. Okay, we'll leave it like that.