I'd like to welcome everyone to day two of TechCon and, for those of you who were here yesterday, what we call the post-Kieberian one-trillion era. Hopefully those of you who were here had a chance to see some of yesterday's keynotes. I thought they were all pretty interesting, and there's a lot in the press already about them. We've got a really big day scheduled for day two. Day two is traditionally when the exhibition hall opens, so the expo hall will open at 10:30, and I hope you get a chance to go over and see that. There are obviously a lot of tech talks going on as well. But for this morning, we've got a couple of other keynotes. Simon Segars is going to come back on stage a little later this morning and talk about the connected community and everything that's going on with ARM. But before that: if you saw Masa's and Simon's presentations yesterday, you saw Masa talking a lot about vision and its importance in the future. And I think it's timely that we're going to have Jem Davies with us this morning to talk about vision and imaging and how that plays into ARM moving forward. So, Jem, who's been with ARM a lot of years: really smart guy, ARM Fellow, VP of Technology for pixels. And here's a bit of trivia: "Jem" is actually not his real name. Stick around later and you'll find out what it actually stands for. But with that, I want to introduce Jem Davies.

He said he was going to say something about me, and I was wondering what it would be. So, now we know. When you see this picture, what do you see? Do you admire the wonderful color rendering of our new projectors? Do you see the Mona Lisa? Do you still wonder who she was? Images create strong responses in people. There's an English saying that a picture paints a thousand words, and we know that people use vision as one of their most important senses. Psychology experiment after psychology experiment shows that so much of the communication between people is non-verbal: it's done by seeing each other. We know we are very visual creatures. But in 1503, just like today, imaging technology development was key. In this case, it was oil paints on canvas, and what Leonardo da Vinci did was to add beeswax to the oil paint to stop it drying so quickly and then cracking.

Fast forward over 400 years to 1969, and the image capture technology has changed to large-format color film. But the power of a picture to become one of the most iconic images of the 20th century remains the same. Now, today, this might be shot from a drone. Back then, it was a man up a stepladder whilst a policeman held back the traffic. Something's changed, something's stayed the same. Oh, and for the younger members of the audience, I'll be around later to explain what film was.

Fast forward another 50 years, and the technology has moved on again. Another famous subject, and King's College Chapel. But this time it's taken on a mobile phone. The image capture technology, of course, is now a CMOS image sensor. The captured image is now displayed on screen with fantastic quality and resolution. Critically, the ability to create powerful images is now in the hands of just about everybody on the planet. Truly, the technology that all of us here create has changed the behavior of the world and will continue to change it. So capturing and displaying great-quality images has become absolutely vital to businesses based around modern consumer electronics devices: the sort of technology that you create and that ARM's products go into.
Now, if you work in fields such as mobile, then the relevance of all of this is, I'm sure, completely obvious to you, and you're already bored. But if you work in other fields, like embedded or IoT, then I'm here to tell you that this is going to change your world. This is going to transform your business, since you will be pushed to deliver more interesting, visually capable devices. And I'm, of course, here to help you with that.

In May of this year, we announced the acquisition of Apical. It was my big acquisition. Of course, I've been somewhat upstaged on that front recently. Apical was a global leader in embedded imaging and computer vision technology, and their technology is already embedded in one and a half billion smart devices in the world. They have an image signal processor, an ISP, which is essential to turn the data that you get out of a sensor into something recognisable as a picture. So we now have a range of ISPs for the full range of devices that we go into, from low resolution right up to the highest-megapixel sensors, using world-leading capabilities in dynamic range management, colour management, noise reduction: all the usual. So we can now capture great images.

We also have the technology to display great images. Assertive Display adapts the display to ambient light conditions, producing a better viewable picture; it also reduces display power, typically by about 30%, and it increases OLED lifetime. It's used by all the leading OEM handset manufacturers and is shipping in numerous devices. In fact, you've probably got it in the phone in your pocket right now. Assertive Display is a display engine that fits within a display processor, and the Mali display processors from the Mali-DP550 onwards have a co-processor port which was specifically designed for it. We had been partnering with Apical on that for some years, so it just drops in. It's ready to go. We've paired it up, integrated it, and shipped it to several of the partners here today. So you can see: now we can capture and display great images.

Now let's look at the next leap in computing: how we interpret those images. Computer vision, or teaching computers to see, would be revolutionary. It's going to change everything. And let me unpack what I mean by revolutionary. There are communication barriers between the digital world and the real world, where we humans live. Touchscreens were revolutionary, and they did reduce those barriers. Along with mobile phone technology, we opened up the internet to a generation of people who couldn't access it through old desktop technology. But really, all we did was teach people new tricks. We taught them how to stab their fingers at screens and see what happens. Along with speech recognition and speech synthesis, computer vision has the capacity to knock down those barriers completely. And that, I believe, will be revolutionary, as Masa was talking about. Imagine life without those barriers between the digital and real worlds. What would that mean for you? What would that mean for your businesses? And what would that mean for the products that you create? See what I mean by revolutionary.

We do, however, have a small problem. That well-known saying I mentioned earlier, that a picture paints a thousand words: well, Professor Stephen Hawking has added a modern twist to it. That same picture also uses up a thousand times the memory. And it's not just a well-known phrase; there are statistics to back this up. More data means more storage.
A modern cheap hard disk drive holds about a terabyte or so, so an exabyte is a million hard disk drives. This is a lot of data. My favourite of these statistics is the rate at which video is uploaded to YouTube, currently running at 500 hours of video every single minute. Just think about the data. Even with better compression from things like HEVC, a single 4K30 security camera will fill up that disk drive in about 24 hours (there's a quick back-of-envelope check of this at the end of this section). So storing pixel data is a problem. Transmitting it to be processed elsewhere is even worse: we just cannot afford the bandwidth or the power. There's too much surveillance video being created to sit and watch and interpret it all manually. Certainly not reliably. So can we interpret those images on device, to extract meaning from them automatically? If we can do that, we can reduce this data size considerably.

Now, unless you have a visual impairment, seeing feels easy. As Masa was saying, from the trilobites onwards, just about every big living creature manages this. Now, science has a very good understanding of what happens on the left-hand side of this picture, where photons hit the eye and electrical signals go down the optic nerve. But I'd argue that what goes on in the right-hand side of the picture, what goes on inside the brain, is still pretty poorly understood. It's a topic of very active research. And of course, if we don't understand how the brain does it, it's very difficult to copy that and model it on a computer.

In 1966, at MIT, they started a summer project for undergraduates to connect a camera to a computer and get it to interpret the images, to see what was in those images. They thought it would take about two weeks; given that it's students and they've got to do experiments and write up some papers, call it a two-month summer project. It took about 40 years, actually, before it really started to work. And nearly 50 years after that experiment, in 2012, when AlexNet won the ImageNet Challenge, computers finally started to become better than humans at recognising the contents of images, identifying them correctly with fewer false positives. Today, that technology has taken computer vision mainstream. We're taking it down to all sorts of devices, battery-powered devices, and that's going to change the world.

And, of course, this isn't just, you know, future magic. We have real use cases presented up here. Fully autonomous cars obviously get all the press at the moment, but they have very well-defined use cases, and if we can do that, it will also pave the way for other market use cases, such as robotics. And it's not just automotive. Computer vision and imaging technology is fundamental to many other use cases. Augmented reality and virtual reality are also very high profile. Augmented reality is really virtual reality plus computer vision. And many of our partners here today are already having great success with devices based on our technology. Now, you may have some concerns about everybody walking around wearing headsets, and perhaps I share some of those concerns, but I think this is very much version one of this technology, and the ultimate applications are going to be huge. I mean, you saw the effect that Pokémon Go had, and that was really quite simple. Just imagine that done properly, rolled out across almost everything you do.
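As promised, here is a quick back-of-envelope check of those storage numbers, written as a few lines of Python. The ~90 Mbit/s HEVC bitrate is an assumption chosen to match the "about 24 hours" claim, not a figure from the talk; real 4K30 security cameras vary widely.

```python
# Back-of-envelope check of the storage numbers above.

TB = 10**12          # bytes in a terabyte (decimal, as drive vendors count)
EB = 10**18          # bytes in an exabyte

# An exabyte really is a million 1 TB drives:
print(f"1 EB = {EB // TB:,} x 1 TB drives")   # -> 1,000,000

# Assumed 4K30 HEVC bitrate (illustrative, reverse-engineered from the
# ~24-hour claim):
bitrate_bps = 90e6
bytes_per_day = bitrate_bps / 8 * 86_400      # seconds in a day
print(f"Days to fill 1 TB: {TB / bytes_per_day:.1f}")   # -> ~1.0
```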
Perhaps more seriously, we've also recently seen an AR-based system which presents a truly virtual presence at a conference table, where you can talk to your colleagues from around the world. And this would surely be very attractive to many of the people in this audience who do rather too much travel, as I'm sure many of us do.

Security, surveillance, and video analytics is a market measured in hundreds of millions of units now. And it's not all Big Brother. Finding spaces in car parks doesn't need sensors under every parking space; it'd be cheaper and much easier to use a much smaller number of cameras and some vision-sensing capability. Detecting overcrowding in the metro, raising alerts to stop people crowding onto station platforms: these are things that are all happening now. We can even use CV to detect people falling onto the floor without streaming video of your grandmother around the internet, with all the possible security concerns you might have over that.

And some use cases enable other features invisibly, a bit like ARM. Photography is a huge use case for computer vision. People and face detection can be used to drive the "3A" algorithms: auto focus, auto exposure, and auto white balance. Most people, most of the time, want the faces of the people in a frame to be correctly focused and exposed. A camera that can do that, or that can identify a child (that's actually rather easy to do: you just compare the ratio of head height to body height; there's a toy sketch of this at the end of this section) and then autofocus on the child's face in the frame, will produce pictures that, again, most people will think are great images. And this is all happening now.

And there are a number of reasons why this change is happening now. Mobile is key for mass-market technology adoption. We've seen this many times before, and we're seeing it now with computer vision. Only a few years ago, the capture of great high-quality images would have been just for specialist, very expensive cameras. But now this is in the hands of just about everybody on the planet, even in cheap phones. And there are some effects of that. There are now between four and five billion cameras sold every year. If you think about one on each face of a phone or a tablet, and throw in laptops, digital cameras, and surveillance cameras, you get up to those sorts of numbers. And as a consequence of this, the economics of using a camera as your sensor of choice in your new gadget become absolutely overwhelming.

This is all creating a perfect storm. First take the economics of mobile, then add in the process developments that we all know and understand, bringing tremendous computing power into very small battery-powered devices. Add in technologies like deep learning and neural networks, and computer vision can now become completely transformational. Now, for us, this means we need to ensure we have the right IP. For you, this creates a massive opportunity for companies looking to tap into growing markets, or indeed to define completely new markets. A recent Tractica report valued this market opportunity at $38 billion by 2018. This is real money. So, to be clear: capturing and displaying great-quality images, and also interpreting images, even when those images have been taken in difficult conditions (think about your car in daylight, in the dark, in tunnels, in bright sunlight), will be at the heart of so many of the devices that you build and that you develop applications for.
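As a brief aside, here is the toy sketch of that head-to-body-ratio heuristic mentioned above. The threshold and the example box sizes are illustrative assumptions; a real camera pipeline would take these measurements from its face and person detectors.

```python
def looks_like_child(head_height_px: float, body_height_px: float,
                     threshold: float = 0.20) -> bool:
    """Toy version of the head-to-body-height heuristic from the talk.

    A toddler's head is roughly 1/4 of standing height, an adult's
    roughly 1/7.5, so a simple ratio threshold separates them. The
    0.20 threshold is an illustrative assumption, not a tuned value.
    """
    return head_height_px / body_height_px > threshold

# Example boxes a person/face detector might report, in pixels:
print(looks_like_child(head_height_px=90, body_height_px=320))   # True  (ratio 0.28)
print(looks_like_child(head_height_px=60, body_height_px=520))   # False (ratio 0.12)
```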
Whether those are personal computing devices or intelligent autonomous machines, the effects of this on all of your businesses will be profound. So how can we help you with this? Most of the original research took place on supercomputers and mega-servers, and what we've done is optimise those CV and machine learning workloads, in frameworks like TensorFlow, benchmark them on ARM CPUs and Mali GPUs, and analyse those workloads to see where they would most appropriately run. We wanted to see what you can do in a sensible power budget, not just on water-cooled machines. And, unsurprisingly, some of this code is better suited to running on a CPU and some to running on Mali GPUs. The Mali-G71 GPU, based on the new Bifrost architecture, is a big step up in performance for this neural network code. And as the algorithms change from the 32-bit floating point used on servers down to 16-bit floating point, or even 8-bit integers, in devices, further increases in performance will be obtained (there's a minimal sketch of that kind of quantisation at the end of this section). And we're always looking to improve the CPU and GPU capability as time goes on, whenever we get one of these very important use cases, like neural networks and computer vision.

But we can also take this one step further. And so I'd like to talk to you now about ARM's new engine that was designed specifically for computer vision. In fact, just take a look at this video.

The third element of ARM's new IP portfolio is a small-silicon-area IP block with embedded firmware to enable on-device computer vision at really low power. It takes in video or raw sensor data of the scene and converts it to a full digital representation of key features, recognising elements in the scene like people, calculating which direction they're facing, and identifying them if it has seen them before. It's capable of identifying an unlimited number of objects in the scene in real time and is highly suitable for battery-powered portable devices. That video showed us running the CV engine in an FPGA in the atrium at ARM Cambridge, detecting and tracking people, including a mystery guest you might have spotted. This high-resolution, full-frame-rate capability, coupled with extremely low power, actually enables new use cases: things you couldn't previously do. And new device categories can be created, with new businesses associated with them as well, that weren't possible before. The IP has been licensed by multiple SoC vendors for next-generation smart cameras and smart sensors.

So now let's look at how this all fits together. We now have a fully integrated suite of IP, including the new additions from the acquisition. We have Assertive Camera, the Mali camera; we have Assertive Display; and of course it all plugs together using our cache-coherent interconnect, and it all works together very nicely, as you might expect. Of course, that's not all. We're working hard behind the scenes. We've already integrated the IP, but here's just a flavour of some of the things we're going to do to make it even more effective. Of course we'll introduce compression. Of course we'll introduce special optimised paths between the ISP, the GPU, the CPU, and the vision engine, and those special optimised paths will reduce power, reduce bandwidth, and reduce latency, of course. We're already using the computer vision engine to do people and face detection to drive our video encoder. The region-of-interest encoding capability enables us to maximise encode quality where it matters and reduce the size of the video stream (there's a sketch of the idea below). And, of course, more in due course.
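On the precision point above: a minimal sketch of symmetric linear quantisation from 32-bit floating point down to 8-bit integers, assuming NumPy. Production frameworks use more elaborate schemes (per-channel scales, calibration data), so treat this purely as illustration.

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric linear quantisation: map FP32 weights onto [-127, 127]."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate FP32 values from the INT8 codes."""
    return q.astype(np.float32) * scale

w = np.random.randn(256).astype(np.float32)
q, s = quantize_int8(w)
err = np.max(np.abs(w - dequantize(q, s)))
print(f"max reconstruction error: {err:.4f}")   # small relative to the weight range
```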
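And on region-of-interest encoding: a sketch of a per-macroblock quantisation-parameter (QP) map in which detected face boxes get a lower QP, i.e. higher quality, than the background. The function and its parameters are hypothetical, not ARM's encoder interface.

```python
import numpy as np

def roi_qp_map(frame_w: int, frame_h: int, face_boxes,
               base_qp: int = 37, roi_qp: int = 27, block: int = 16) -> np.ndarray:
    """Hypothetical sketch: per-16x16-block QP map for a video encoder.

    Lower QP means finer quantisation, i.e. more bits and better quality,
    so faces stay sharp while the background is compressed harder.
    """
    qp = np.full((frame_h // block, frame_w // block), base_qp, dtype=np.int32)
    for (x, y, w, h) in face_boxes:                 # boxes in pixels
        r0, r1 = y // block, -(-(y + h) // block)   # ceil for the far edge
        c0, c1 = x // block, -(-(x + w) // block)
        qp[r0:r1, c0:c1] = roi_qp
    return qp

# One face near the top-left of a 1080p frame:
qp = roi_qp_map(1920, 1080, [(160, 96, 128, 160)])
print(qp.shape, qp.min(), qp.max())   # (67, 120) 27 37
```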
I want to give a shout-out to a number of other sessions happening today. Later on today I'm going to be sitting at an analyst round table, talking about some of these things and answering your questions. This afternoon there's a great session from some of our guys talking about computer vision and deep learning. I encourage you to see Judd talking about all of the IP that came from Apical, now the Imaging and Vision Group. And tomorrow, Thursday afternoon, there's a great session on VR; you really should see that.

So now let's look at the big picture. How does this affect you? There will be new opportunities for the ARM ecosystem, arising almost daily, to make more interesting and compelling devices with new capabilities. I can hardly think of any markets where the demand for better imaging and vision capabilities will not be felt: from traditional camera markets, through new devices like drones, right down to the Internet of Things, where the economics of being able to fit a low-cost vision sensor and some CV capability make the camera the sensor of choice for low-cost IoT devices. So if you're a silicon partner or an OEM, if you're a software developer or a service provider, you'll be able to create new devices that will become contextually aware, that will do different things depending on who and what is near them. And you'll be able to create new businesses based upon those devices. You, the ARM ecosystem, will have an amazing range of new opportunities, and I look forward to seeing you create your own masterpieces.