So we're here at the ARM booth checking out the HDR technology. Hello, who are you?

Hi, I'm Judd Heap. I lead marketing for the IBG group inside of ARM.

So what are you doing here with HDR?

What we're showing here is the ability to take HDR10 or Hybrid Log-Gamma (HLG) content and properly map it and display it on a standard dynamic range display. This might be used in a case where you don't have a really bright display panel, like a bright television, but you still want to enjoy HDR content on a standard monitor or display.

So how do you display HDR? Do you need a special display?

Normally you need a display that's very bright with a very good contrast ratio, but mobile devices and tablets don't have that, because being that bright would drain their battery. So what we're showing here is the ability to take that same content and map it properly onto a regular display so you don't lose any of the data.

So what's going on here? What is this?

This is what we call an ARM Juno board. It has an ARM Cortex processor, and in this stack here we've got a stack of FPGAs. One of the FPGAs is doing a decode function for the video. The other FPGA is implementing the core we have called Assertive Display, which is one of the products from Apical that was acquired by ARM.

So what did you do at Apical?

At Apical I ran their field support team, and previously I was their engineering lead.

So Apical was acquired by ARM?

In May. May of this year, that's right.

So what did Apical do?

Apical is an imaging IP company that, again, was acquired this year. We have products in display, in camera, and also in computer vision. So now all those products are within ARM.

So for computer vision, can you do all that on an ARM CPU?

Actually, we have a hard-wired core for that.
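The HDR-to-SDR mapping described above can be sketched with a simple global tone mapping operator. This is only a minimal illustration of the general idea, not Apical's algorithm (which is proprietary local tone mapping); the extended Reinhard operator below is a well-known textbook curve that compresses highlights into SDR range instead of clipping them.

```python
def tone_map_reinhard(luminance, white_point=4.0):
    """Map an HDR luminance value (relative units, may exceed 1.0)
    into [0, 1] for a standard dynamic range display.

    Extended Reinhard operator: near-linear in the shadows, smooth
    roll-off in the highlights; `white_point` is the HDR level that
    maps exactly to SDR white (1.0).
    """
    l = luminance
    return (l * (1.0 + l / (white_point ** 2))) / (1.0 + l)

# Shadows stay roughly linear, highlights are compressed rather than clipped.
print(tone_map_reinhard(0.0))  # 0.0
print(tone_map_reinhard(4.0))  # 1.0 (the chosen white point)
```

A real pipeline would apply this per-pixel after decoding the HLG or PQ transfer function, and locally (per region) rather than globally, which is what "local tone mapping" refers to later in the interview.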
Yes, you can do that on an ARM CPU or on the Mali GPU, but the solution that Apical brought to ARM was a fully hard-wired solution, in IP form, for computer vision.

So on the SoC you'd have a block with the IP for computer vision, and it might sit next to, say, an ARM Cortex-A7 processor. Is it only recently that this kind of IP is being added to SoCs?

Yes, actually. I think the first SoC with this IP is coming out later this year.

So it's not even out yet?

No, it's still very new. Still very new, yes.

There's a company called Mobileye that was acquired by Intel. Is that a competitor to your technology?

It's a little bit different. Our design is a fully hard-wired design that can do very high frame rates and very high resolutions, but it's doing the same type of application: object detection.

They also have an SoC, right?

That's right. And now Intel has that SoC.

And they're in Intel's drones now. So is this something that could be in a drone too?

Yeah, this technology could be in a drone, could be in an IP surveillance camera, could be in a mobile device. Anywhere you need to detect objects, especially people; that's what we're targeting. The computer vision technology is brand new, but our camera design and our display design are already in many, many devices around the world today.

Hundreds of millions?

Exactly. Our display engine has shipped more than 500 million units.

That's awesome.

Yeah, it's great. I love knowing that someone is holding a product and our IP is inside that product.

Apical is based in London, and we have an office here in San Jose. Your main engineering center is about 90 minutes north of London?

That's right, yes. The inventor of most of the technology and all of our engineers are based at our site in Loughborough in the UK.

So how did they invent it? How did they come to work in this kind of field?
Well, our CTO, back about 10 years ago, started working on understanding a digital model of the human eye, and that's where he came up with the local tone mapping algorithm that's used in a few of our products.

And that ties into the keynote here at ArmTechCon 2016.

Yes, from Jim Davies, that's right. He said that the most important thing that ever happened was the eyes.

Very key to our technology, that's right.

And there are a lot of gaps to fill in. You can't just do everything in the cloud.

That's right. You have to do it at the edge, and our computer vision solution is tailored for exactly that: doing people recognition at the edge, in the camera device, rather than in the main cloud of the network.

So is this that solution here too?

Yes, this is the computer vision solution. What you'll see here is boxes drawn around people. If you can see the people on there, you'll see that faces are in green, upper bodies are in blue, and full bodies are in purple.

So how does that work? This is an FPGA right now?

It's an FPGA right now, yes, but it's the core, a hardware core running on the FPGA, that's able to do all of these detections in real time at 1080p60, from this camera right here.

How many people can you detect?

Unlimited.

Anywhere in the frame?

That's right. An unlimited number of people per frame.

And recognize the ones you need to know?

Well, it does detection. Recognition, if you wanted to find out someone's name from their face, would run on a CPU, away from the IP. But the IP is detecting people in terms of face, upper body, and full body, at 1080p60, and it can detect a general object at about 60 by 60 pixels. So, quite far away from the camera.

So is there any limit to how many applications you can think of for this technology? Or do you just never sleep and think about new uses all the time?
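The demo's color coding (faces green, upper bodies blue, full bodies purple) can be sketched as a small data model. The class names, structure, and helper below are illustrative assumptions for the sketch; the hardware core's real output format isn't described in the interview.

```python
from dataclasses import dataclass

# Demo color scheme as stated in the interview (RGB triples):
# faces green, upper bodies blue, full bodies purple.
CLASS_COLORS = {
    "face": (0, 255, 0),
    "upper_body": (0, 0, 255),
    "full_body": (128, 0, 128),
}

@dataclass
class Detection:
    cls: str   # "face" | "upper_body" | "full_body"
    x: int     # top-left corner in the 1080p frame
    y: int
    w: int     # box size; the core resolves general objects
    h: int     # down to roughly 60x60 pixels

def box_color(det: Detection) -> tuple:
    """Return the RGB color to draw this detection's box, per the demo."""
    return CLASS_COLORS[det.cls]

d = Detection(cls="face", x=640, y=200, w=80, h=80)
print(box_color(d))  # (0, 255, 0)
```

The point of the design is that the hardware emits these lightweight box records per frame at 1080p60; anything heavier, like putting a name to a face, runs downstream on a CPU.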
It depends on what our customers want to do with it. Again, we've concentrated on people detection, but you could reprogram the engine for other types of objects. You could do things like road signs, for example.

Road signs? With IoT and IP cameras, there's so much that could be done with your technology.

IoT is a great area, actually, because with this engine you don't have to output video. You could put a sensor in the corner of a room and detect that someone is in the room, with better accuracy than, say, a PIR detector, and output metadata or text only that says there's someone there, rather than having to transmit video, which has privacy concerns and all kinds of bandwidth problems.

And you're also improving consumer devices? Improving how they display content?

Yeah, we have a lot of devices on the market today using a product called Assertive Display, which gives you better viewability in bright sunlight and also saves power indoors. You can see that in many, many tablets and handsets today.

So it adjusts the brightness of the display?

It does pixel processing to adjust it. It's a technique called local tone mapping, and it's doing that without changing the backlight, so you're not wasting power by making these adjustments.

It's got to be exciting, right?

Absolutely, we're very happy to be a part of ARM. This ecosystem is huge.

It is. And I guess the potential of the next step is ultimately very exciting.

It is, absolutely, yes. ARM is very much behind what we're doing at Apical, now the imaging and vision group within ARM. We'll be expanding our products, expanding our roadmap, and our customers will be very happy with what's coming in the future.
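The metadata-only IoT output described above, reporting that someone is present rather than streaming video, can be sketched as a tiny payload builder. The function name, field names, and detection format here are hypothetical; the interview only establishes the idea of sending text or metadata instead of pixels.

```python
import json
import time

def presence_metadata(detections, room_id="room-1"):
    """Summarize per-frame detections into a small text payload.

    No pixels ever leave the device: only an occupancy flag and a
    person count, which sidesteps the privacy and bandwidth problems
    of transmitting video.
    """
    people = [d for d in detections if d["cls"] == "full_body"]
    return json.dumps({
        "room": room_id,
        "timestamp": int(time.time()),
        "occupied": len(people) > 0,
        "people": len(people),
    })

# One full body and one face in frame -> report a single person present.
msg = presence_metadata([{"cls": "full_body"}, {"cls": "face"}])
print(msg)
```

A payload like this is a few dozen bytes per frame, versus megabits per second for a 1080p60 video stream, which is why detection-at-the-edge fits battery- and bandwidth-constrained IoT sensors.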