So we're here at Linaro Connect in Vancouver. Who are you?

My name's Jem Davies. I'm an Arm Fellow, and I'm now General Manager of Arm's new Machine Learning group.

So there's an announcement that Arm is doing something with machine learning with Linaro?

Yes, indeed. The content of my presentation today was that Arm is announcing two things. The first is that we have become a founding member of the Linaro Machine Intelligence Initiative, and the second is that we are donating to that initiative the open-source software Arm NN, which is Arm's software inference engine for machine learning, designed specifically to run at the edge on the wide variety of Arm devices.

So you are providing a whole bunch of open-source software for machine learning. Is that the algorithms, or the protocols? What is it?

The inference engine takes in a model and some data, runs the model on that data, and actually performs the inference stage. The code consists of both the inference engine itself and, underneath it, a large quantity of highly optimized code to run on CPUs, GPUs, and our new neural network processor, because we find that a naive implementation versus a properly optimized implementation can differ by two orders of magnitude in performance. You can get a 100x speedup by optimizing these things properly.

A 100x speedup running it on CPU or GPU?

Comparing naive code against optimized code that properly accounts for the effects of the memory system and the data, yes, you can get anything up to 100x. So the optimization is incredibly important, and Arm, with its understanding of the micro-architectures of its CPUs and GPUs and its ability to do in-depth performance analysis, is in a position to do this probably better than others.
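The 100x claim is about memory behaviour rather than arithmetic: the same computation can walk memory cache-line by cache-line or stride across it. A toy sketch of the idea in a matrix multiply (pure Python, so it illustrates the access pattern rather than the real speedup; production kernels do this with vectorized, architecture-tuned native code):

```python
# Toy illustration of naive vs. memory-friendly matrix multiply.
# In optimized native code the reordered version can be dramatically
# faster; here only the memory access pattern differs.

def matmul_naive(a, b):
    """C[i][j] = sum_k A[i][k] * B[k][j]. B is read column-wise,
    striding across memory on every inner-loop step."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            s = 0.0
            for k in range(m):
                s += a[i][k] * b[k][j]
            c[i][j] = s
    return c

def matmul_reordered(a, b):
    """Same arithmetic with the j and k loops swapped: both A and B
    are now traversed row-by-row, the cache-friendly order."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = a[i][k]
            row_b = b[k]
            row_c = c[i]
            for j in range(p):
                row_c[j] += aik * row_b[j]
    return c
```

Both functions produce identical results; the only difference is the order in which memory is touched, which is exactly where "understanding the effects of the memory system" pays off on real hardware.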
So, without trying to get any secrets out about what kind of machine learning they're using: Huawei, I guess, is doing one, Qualcomm is doing another, maybe Apple is doing another. Many of the Arm licensees are doing their own right now?

I believe what Qualcomm announced today is that it is moving to port to the Arm NN software framework. So people are already seeing great value in what we are doing. It's an open framework, so if people wish to provide their own neural network accelerators, they can plug those into this framework.

So it's possible to just plug in? Does that mean people have their own machine learning on the die, physically, and then they hook into your framework?

Yes. Several of the Arm licensees are implementing their own dedicated neural network processors, just as we are; we're building our own to license to those partners. And there is a need for that framework to be open, documented, and easy to use, to get people to plug into it.

And right now there's 7nm coming onto the market. Is this a huge opportunity to use all the space that frees up on the die?

Everybody always talks about the fact that there's going to be all this free space, and oftentimes it doesn't seem to happen that way. But yes, 7nm provides the ability to lay down an awful lot of transistors, and increasingly we will find those spent on domain-specific processing. Machine learning is turning out to be one of the most important forms of domain-specific processing.

So already for imaging it's a huge boost to use AI?

AI is being used in pretty much every single use case you can imagine, but yes, it is becoming popular in image processing as well.

So you had some other concrete examples, so people can really understand the usefulness right now?
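The "open framework" idea described above, where partners plug their own accelerators in behind a common interface with a reference fallback, can be sketched as a simple backend registry. All names here are hypothetical; Arm NN's actual backend API is C++ and considerably more involved:

```python
# Minimal sketch of a pluggable-backend inference framework.
# A vendor registers a backend; the engine picks the first one that
# supports the requested op, falling back to a reference CPU path.

class CpuRefBackend:
    name = "cpu-ref"
    def supports(self, op):
        return True              # reference backend handles everything
    def run(self, op, x):
        if op == "relu":
            return [max(0.0, v) for v in x]
        raise NotImplementedError(op)

class HypotheticalNpuBackend:
    name = "vendor-npu"          # stand-in for a partner's accelerator
    def supports(self, op):
        return op == "relu"      # accelerators typically cover a subset of ops
    def run(self, op, x):
        return [max(0.0, v) for v in x]   # pretend this ran on the NPU

class Engine:
    def __init__(self):
        self.backends = []
    def register(self, backend):
        self.backends.append(backend)
    def execute(self, op, x):
        # first registered backend that supports the op wins
        for b in self.backends:
            if b.supports(op):
                return b.name, b.run(op, x)
        raise RuntimeError("no backend for " + op)

engine = Engine()
engine.register(HypotheticalNpuBackend())  # preferred accelerator
engine.register(CpuRefBackend())           # always-available fallback
```

With this registration order, `engine.execute("relu", [-1.0, 2.0])` is dispatched to the hypothetical NPU backend, while any op the accelerator doesn't support would silently fall back to the CPU reference path.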
Yes. Most of the successful uses of computer vision and of speech recognition, things like that, are based on top of machine learning algorithms.

Can you talk a bit more specifically about what Linaro can do to help in this area? Is there going to be anything about all these amazing Linaro engineers coming up with new algorithms?

I think Linaro is the ideal place to collaborate around software in the open source. We are donating what we've done already, which is about 100 man-years of effort, but we are hoping people will add to that: they will show where they can do better, or where they've got better ideas. We have found Linaro to be a very productive environment for collaboration in the past.

And you said there is not enough bandwidth in the world for all this: every smart security camera streaming 4K to a cloud is going to break the internet. So is it already working out that these devices are using AI, or is this mostly still potential?

If we look at something like the Hive security camera, it's already using machine learning techniques running on the device itself to decide whether there is anything interesting in the video. It's continuously recording, continuously viewing, and of course most of the time nothing happens. But if a person comes into the frame, it thinks, oh, this might be interesting, and it sets off a trigger. It's only at that point that it might send the data off somewhere else, perhaps to your mobile phone, to say: look, this person is standing at your front door, what do you want to do about it?

So is it true you were in charge of the GPU?

Yes, I was General Manager of the Media Processing Group at Arm until just under a year ago.

That must have been a fantastic experience, I guess?
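The camera behaviour described above, watching continuously on-device and only shipping data upstream when a trigger fires, is the classic edge-inference pattern. A hypothetical sketch, with a trivial stand-in for the real on-device detector:

```python
# Edge-trigger pattern: run a cheap on-device classifier on every frame
# and only send data upstream when something interesting is detected.
# `person_score` is a hypothetical stand-in for a real ML detector.

def person_score(frame):
    # toy detector: fraction of pixels labelled "person" in the frame
    return sum(1 for px in frame if px == "person") / max(len(frame), 1)

def process_stream(frames, threshold=0.2):
    """Return indices of frames that would be sent to the phone/cloud."""
    uploads = []
    for i, frame in enumerate(frames):
        if person_score(frame) >= threshold:
            uploads.append(i)   # trigger fired: notify, send clip upstream
        # otherwise: discard locally, saving the bandwidth entirely
    return uploads
```

For example, a stream of three frames where only the middle one contains a person, `process_stream([["bg"] * 10, ["person"] * 3 + ["bg"] * 7, ["bg"] * 10])`, uploads only frame index 1; the empty frames never leave the device.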
Oh, it was tremendous. I spent 12 years involved in the media processing group. I was involved in the very original acquisition which formed the basis of it, and we took that business from zero to over 1.2 billion units per year: our customers, our partners, were shipping 1.2 billion chips per year. So that was huge. It was just an awesome job.

There's no bigger GPU company in the world?

Oh no, we are number one, and by a long way. We're probably 50% bigger than anybody else.

So is it a little bit like working in a startup now, doing the machine learning? Is it a new kind of office?

That's exactly the way we're treating it. The CEO and the executive committee are very generously funding us and saying, hey, go and build this, and we are treating it very much as a startup. We're moving as rapidly as we can within the confines of Arm, and occasionally we make mistakes, and we correct those as quickly as we can. We've built a completely new processor within 12 months; that's never been done before. The reason we've been able to do that is that we've been taking decisions very, very quickly and changing them when the data says you should change your mind.

So what's the secret of being the world leader? What is the secret sauce? Is it something you know?
If I could bottle it, I would sell it for a lot of money. I think there are several secrets. The first is that you have to start with the customers at the front of everything you do. We have to build IP that will work with many, many different partners. One of the success stories of the GPU was our ability to take an architecture and scale it to multiple performance points, because customers actually say: oh, I want this much performance; no I don't, I want two-thirds of that; or, oh, I got that wrong, I want three times the performance. The ability to take a particular performance point and scale it up and down turns out to be incredibly useful, and it's one of the reasons why customers love Mali. You can rest assured we're looking at that for our neural network processor as well.

So why is all this neural network work exploding right now? People have potentially been able to do it for a long time, and they have been, for a long time. But why is it happening now?

A lot of these algorithms are the best part of 50 years old. The change is partly some improvement in the algorithms; they're not quite the same as they were 50 years ago. But predominantly it's all about the data. There are now available to people large data sets of very high-quality annotated data to use as the training data to train those algorithms.

Alright, so it's happening; it's not just a marketing buzzword, right?

Right. There are a lot of marketing buzzwords, there is a lot of hype, and we are probably at the peak of ML hype, but that doesn't mean it's not real, and it doesn't mean it isn't going to be huge. We believe it will be huge; we believe it will affect everything we do. Internally we have a tendency to say ML is software 2.0.

Software 2.0, all the way from the Cortex-M0+ up to a self-driving car, or let's say a space rocket, or any kind of huge chip?
Completely.

And so you're going to provide everything?

We are trying to provide everything. One of the things we're doing here at Linaro is saying: look, guys, we can't do everything; here is what we are doing, will you please contribute as well?

Thank you.