So we're here at Mobile World Congress 2018 and you've launched a whole new solution.

Yes, a whole new suite of IP. We're calling it Project Trillium, and we've announced a machine learning processor, an object detection processor and a whole range of supporting software: optimized software libraries to support both our existing CPUs and GPUs and, in time, our new ML processor.

Some of these latest smartphone chips, for example, suddenly talk about having a huge number of... trillions, no, billions of...

So yes, the figure of merit is the number of operations per second, and our new machine learning processor, in a mobile power budget, will probably provide about four and a half trillion operations per second, whereas what you're seeing at the moment is people putting in hundreds of billions of operations per second.

And they also talk about billions of transistors, with a bunch of them being used for AI. So that's going to be a big part of the SoC?

It's not actually that big, because it's a power-limited problem. There's only so much electrical power available inside the chip, and we can use pretty much all of the available power in a very small area, because we've produced a really efficient processor design.

And what is that great for? Imaging?

Many of the initial applications are probably to do with image analysis. A lot of the applications are around things like detecting people, identifying people, analyzing what those people are doing, pose detection, and detecting anomalies: has your grandmother fallen over and is she lying on the kitchen floor? That's an abnormal situation. And also in public surveillance, looking at crowds and checking the number of people within a certain space, that sort of thing. But also audio: speech detection, using multiple microphones to extract a voice from background noise, and turning that audio signal into understandable speech. That's also being done these days with machine learning. Indeed, I'm pretty confident machine learning is only in its infancy. Every time somebody takes machine learning and applies it to a problem that was previously solved through classical methods, we tend to get better results: more accurate results, fewer errors, that sort of thing.

And the IP that came from Apical, is it related to what you're doing right now?

Yes, the object detection processor comes to us through the Apical acquisition. The first generation of that IP was the Spirit processor that came from Apical. We are now licensing version two of that IP.

So is it going to go into smartphones and cars? Where is it going to go?

It's going to go into all sorts of places. At the moment it's present in smart cameras. The Spirit IP that we previously had is already in silicon and is being used in devices like the Hive smart home camera, and we foresee it being used in a lot of places like that.

AI is a buzzword, and machine learning is important. Are there many different ways of doing it? Is your IP flexible enough to be programmed in all kinds of different ways?

The machine learning processor is extremely flexible and is capable of accelerating the workloads of all sorts of neural networks. The object detection processor is much more focused on doing exactly that one thing: detecting those objects.
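As a rough illustration of the crowd-monitoring use case mentioned above, here is a minimal sketch of what a consumer of an object detector's output might do: count how many detected people fall inside a region of interest. The Detection type and the example boxes are hypothetical placeholders for this sketch, not the actual output format of Arm's object detection processor.

```python
# Hypothetical sketch: count "person" detections inside a monitored region.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # e.g. "person"
    x: float        # top-left corner of the bounding box
    y: float
    width: float
    height: float

def people_in_region(detections, region):
    """Count person detections whose box centre lies inside region = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    count = 0
    for d in detections:
        cx = d.x + d.width / 2.0
        cy = d.y + d.height / 2.0
        if d.label == "person" and x0 <= cx <= x1 and y0 <= cy <= y1:
            count += 1
    return count

# Example frame: two people detected, one of them inside the monitored area.
frame = [
    Detection("person", 40, 60, 30, 80),
    Detection("person", 300, 50, 35, 90),
    Detection("bicycle", 120, 100, 60, 40),
]
print(people_in_region(frame, region=(0, 0, 200, 240)))  # -> 1
```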
But we tend to prefer the phrase machine learning as opposed to artificial intelligence, because artificial intelligence has become so overloaded that nobody knows what it means anymore.

Is there, as in GPUs and CPUs, a multi-core configuration or something like that? How do you build a huge machine learning processor if you need one? What is it called?

We've designed a completely new architecture from the ground up, using our GPU methodology: we analyze the workloads, understand what works well and where the performance bottlenecks are, and using that understanding of the workloads as a basis, we've produced this brand new architecture which underpins our machine learning processor. From that, we can scale across a wide variety of performance points. The first processors that we're releasing mid-2018, this year, are aimed at mobile devices and smart cameras. But using that architecture, we can scale down to a very low power point so that it's suitable for always-on devices, and we can scale up through automotive requirements into servers as well.

How does it plug into heterogeneous computing or programming? Is there a whole programming platform people need to learn?

No, we are plugging into the existing software frameworks like TensorFlow and Caffe, the developer frameworks that people already know and love. And we are plugging Arm NN, our software framework, into those to accelerate them on current Arm CPUs, current Arm GPUs and also our new ML processor when that becomes available. So people won't have to learn something new to use it.

So in the future there will be SoCs that have an Arm CPU, GPU, ML processor and more?

Absolutely.

And that'll provide some amazing cool things.

Amazing cool things, yes. We're in the business of providing amazing cool things.

And people are going to be using their devices and it's going to provide some kind of, wow, how did they know that?

Some things will wow them. Some things will just be, oh, that's good. So for example, we will use ML in silent ways, silently enabling better use cases. A lot of the portrait mode in smartphone cameras these days uses machine learning to identify the parts of the image that are a person. That's the outline of you, and behind you is a load of background. We don't care about the background: defocus the background but bring out you, and people will just go, oh, that's a nice camera. They won't know anything about machine learning, they don't care, it will just be a nice camera. So you'll get both things: you'll get the wow, and you'll get the oh, that's nice.

Is it going to optimize storage or networking or other things?

The networking people are particularly excited about machine learning capabilities, because networks these days are getting so big and so complex that they're very difficult to optimize even with genius-level network architects, whereas machine learning has some very interesting possibilities in this space.

Could it be a battery consumption tool?

Absolutely. One of the reasons we're producing this new Arm machine learning processor is precisely to give you better performance within a certain battery budget.
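As a rough sketch of the portrait-mode processing described above: given a per-pixel "person" mask, which in a real phone would come from a segmentation network running on the CPU, GPU or ML processor, keep the person sharp and blur everything else. The image and the mask here are synthetic placeholders so the snippet runs on its own; this is only an illustration of the compositing step, not Arm's implementation.

```python
# Minimal portrait-mode sketch: blur the background, keep the "person" region sharp.
import numpy as np
from scipy.ndimage import gaussian_filter

h, w = 240, 320
image = np.random.rand(h, w, 3)            # stand-in for a camera frame, values in [0, 1]

# Stand-in for a segmentation result: 1.0 inside an oval "person" region, 0.0 outside.
yy, xx = np.mgrid[0:h, 0:w]
person_mask = (((xx - w / 2) / 60.0) ** 2 + ((yy - h / 2) / 100.0) ** 2 < 1.0).astype(float)

# Soften the mask edge so the person/background transition is not a hard cut.
person_mask = gaussian_filter(person_mask, sigma=5)[..., np.newaxis]

# Blur the whole frame, then composite: sharp pixels where the mask says "person".
background = gaussian_filter(image, sigma=(8, 8, 0))
portrait = person_mask * image + (1.0 - person_mask) * background
```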