I saw so many formulas in a presentation when I was in grad school, probably around 15 years back. So this talk is going to be slightly different. At Intel, I lead a developer relations group which predominantly works with software companies who are developing products, and with the end customers who are deploying those products, particularly in the areas of data center, artificial intelligence, and the Internet of Things, and how to tune and optimize their solutions on Intel hardware. Many times when we talk to developers, they ask: what's coming from Intel next? What kind of hardware is coming? Here is my use case; I'm trying to do video analytics, I want to include deep learning and machine learning, and I want to do a little bit of analytics at the edge. What kind of hardware should we be looking into?

So what I'll cover over the next 30 minutes is what kind of hardware is coming from Intel for a variety of use cases. Many people may or may not know that we also have a lot of software libraries, so I'm going to touch on all of that: as a developer, how you can take advantage of them without worrying about the features in the hardware, without writing your own low-level code, and get to market faster. At the end I'll also cover a little bit about how we work with the ecosystem; we have several programs, including one of the largest developer programs in the world. So it's going to be slightly different from what you've heard in the first two talks.

Okay, how many of you have seen the movie I, Robot? I think most of us, right? I saw it too, probably around 12 to 13 years back. The movie is about robots and the scientist who developed them; they're intelligent, and one of the robots breaks the robot commandments and goes rogue. There's one scene I remember vividly, where Will Smith is going through a tunnel in his autonomous car. He's talking to the car, asking for data because he's investigating the death of that particular scientist, and he's getting answers. Suddenly a couple of trucks come at him, he presses a button in the car, and the car asks him: are you sure you want to switch to manual drive? He says yes, takes control of the car, and the rest follows. That was sci-fi probably around 12 years back; it was still science fiction even six years back. But if you look at it now, it can actually become a reality, and we see autonomous driving as one of the key segments, at least for our growth. Without artificial intelligence, it would not have been possible. The movie's plot is set in 2035, and if all goes well and we all do our jobs, hopefully we'll pull that in by 10 to 12 years. So why is it happening right now?
Artificial intelligence has been around for a while, but the hardware capability was not really there to take advantage of it and put it to use. Thanks to Moore's law and the hardware innovations that have happened, we are now able to train models in a time frame that is acceptable, and really move artificial intelligence forward. Data matters in particular: you need quite a bit of it, and with the advent of IoT and the various smart devices around the world, almost 30 billion devices coming and generating all sorts of data, the data available to train models to be accurate has increased quite a bit. We really see these innovations driving different usage models in the AI space.

We've seen some major transitions, at least at Intel: from mainframes to standards-based x86 servers, into the cloud, and we see the artificial intelligence workload as the next big one. Today, of the overall number of servers deployed in the world, around 7% are running machine learning and deep learning workloads. Out of that 7%, around 60% are running traditional ML workloads and around 40% are running deep learning workloads. That's very, very small compared to what's possible, and we envision that AI compute cycles, all the way from training to scoring to inference, at the edge and in the cloud, will see almost 12X growth in pure compute over the next few years.

Here are some data points, and they're staggering. The people sitting here, checking WhatsApp, tweeting, watching video, typically generate around 1.5 gigabytes of data per day. Thanks to 3G and 4G, the consumption of video and the amount of video calls we do really contribute to that. But compare that to a smart hospital: you're talking about almost 3,000 gigabytes per day. And a self-driving car, the one I talked about a few minutes back, is going to generate around 4,000 gigabytes per day. Just imagine: structured, unstructured, and everything in between, all kinds of data will be generated, and if you consider other use cases it will be even more. So this is going to keep growing, and AI, with the help of compute and software innovations and the work around them, is going to help us close in on problems we would otherwise not have thought of solving. Things that would traditionally have taken decades, particularly in areas like finance and healthcare, are going to get solved in a much faster way, helping everyone. And that's especially true if you look at India.
As I said, my group works very closely with government agencies and a lot of software companies, and I'm also part of the NASSCOM Product Council, so we work with several deep-tech companies in the areas of machine learning and deep learning. These are some of the areas where we see a lot of opportunity. If you look at government and security: I'm not sure how many of you know, but the government has announced 109 smart cities, and multiple are already underway. One of the key pieces of a smart city is the DSS segment, which is digital surveillance and security. Our team is working with multiple software companies who are doing video analytics, and today most of that is predominantly traditional analytics, where you take an analog camera or an IP camera and do something with the footage. But in a few years, as the smart cities develop, you will see machine learning and deep learning algorithms coming into both the edge and the server to make it even more meaningful. This is one of the areas where we see opportunity, and we are already working with many of the companies in this space.

In addition to that, healthcare. In India there are 300 or so startups working on machine learning and deep learning, and healthcare is one of the key areas. People are calling India the diabetes capital of the world, and the number of cancer patients and heart patients is going to be significant, thanks to our lifestyle and the vadas and the bhajiyas, et cetera. So healthcare is going to be a very important factor. Somebody was telling me there is a company in India which takes a photo of your tongue, does some image recognition, and tells you whether you are anemic or not. Similarly, many such companies are going to come out; somebody is going to predict what's going to happen to you, or give you personalized care. So healthcare is going to be one important factor.

The third one, where we see a huge opportunity particularly for India, is IT services, particularly chatbots for customer support. You don't really need to go and talk to a human being, and you get service 24x7. Those are some of the segments we see as important, and it's no different outside of India.

I'm going to skip this; I already talked about it a little bit. Let me go right into where we come in. At Intel, we are calling it a virtuous cycle of growth. What we are seeing is that the huge number of devices generating data pushes everything back to the data center to be analyzed, and the more analysis you do in the data center and the more value you give to the users, the more end devices get pushed out in turn. So it goes in a virtuous cycle, and this is a huge opportunity for us. And some technologies like 5G, which is already here and which we're already working on with some of the players, are going to revolutionize use cases like autonomous driving in particular.
So if you look at us, we have a portfolio all the way from hardware to software to communication technology, which is going to be very critical in deploying these AI solutions and technologies in the world, whether it's training or inference. Just to give you a few examples on the hardware side: traditionally, there are Xeon servers running typical machine learning and deep learning workloads. Then there's a many-core architecture, which we call Xeon Phi: smaller cores, but many of them, which can really take advantage of huge parallelism and vectorization if your code can use them, and get you much better performance, particularly on the training side. Then, if you're really looking at low latency, particularly when you're getting a stream of data which you have to crunch and act on right away, like a stock ticker, hardware such as FPGAs, field-programmable gate arrays, might be more useful. And finally, if you're looking at the edge, for example the video surveillance case I mentioned, where you actually have to do some analytics right at the edge, we have hardware out there which will do that for you.

So in general, both the hardware and the software are going to be very important. We have made some acquisitions: Saffron, which is a reasoning system, which we've started to incorporate; and Nervana, one of the world leaders in deep learning ASICs, who are going to have specialized hardware which speeds up model training quite a bit. Across the spectrum, we're also working on 5G communication, which is going to be important, and on memory. That's an overview of the portfolio a developer can take advantage of.

Now, when we talk to developers, they typically raise a bunch of pain points. They say: look, the hardware is changing; we may not understand all the technologies coming into the hardware, and when we deploy with a customer, they may have different hardware, so we may not be skilled in optimizing at that deep a level and writing that code ourselves. The second thing they say is: I'm developing a particular AI solution, I have a core algorithm and I'm very skilled at it, but I need specific libraries which I don't have, which I either get by matchmaking with other ISVs or as an optimized library I can use, so that I can focus on my algorithm and my use case and not worry about all that. The third typical pain point is: I'm not getting the performance; my training is taking way too much time, et cetera. So we have looked into all of that. If you picture the stack, the developer is at the top, creating use cases or an application, and the levels below that are where we come in, with the hardware at the bottom.
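To make that vectorization point concrete, here's a minimal sketch, plain NumPy rather than anything Intel-specific, comparing the same reduction written as a scalar Python loop versus a single vectorized call. The vectorized form is what lets optimized native kernels use the CPU's vector units on hardware like Xeon and Xeon Phi; the array size is an arbitrary choice for illustration.

```python
import time
import numpy as np

# Illustrative micro-benchmark: the same scale-and-sum over 10M floats,
# written two ways. Sizes are arbitrary.
x = np.random.rand(10_000_000)

# Scalar Python loop: one element at a time, no SIMD, no parallelism.
t0 = time.perf_counter()
total = 0.0
for v in x:
    total += 3.0 * v
loop_s = time.perf_counter() - t0

# Vectorized NumPy: one call, dispatched to optimized native kernels
# that can use the CPU's vector units.
t0 = time.perf_counter()
total_vec = (3.0 * x).sum()
vec_s = time.perf_counter() - t0

print(f"loop: {loop_s:.2f}s  vectorized: {vec_s:.4f}s  "
      f"speedup: ~{loop_s / vec_s:.0f}x")
```

On typical hardware the vectorized version wins by roughly two orders of magnitude; the libraries described below chase the same gap at the instruction-set level.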
I talked a little bit about the hardware side already, so I won't dwell on it; we will continue to come out with new hardware over the next few years. On top of that sit our libraries, and not many people know that we have a holistic set of them. For example, we have our own Python distribution, where SciPy and NumPy have been optimized for our hardware to give you much better performance. Intel DAAL is the Data Analytics Acceleration Library, which covers traditional machine learning, things like pre-processing, analytics, and post-processing; we have taken a bunch of those and optimized them for the underlying hardware so you can write directly to that. Then there's the Math Kernel Library and its counterpart for deep neural networks, MKL and MKL-DNN. These are the real primitives, the algorithms that really require writing to the hardware, to the instruction sets available, to get the maximum performance out of it. There are multiple of these primitives, and MKL-DNN is actually open source; anyone can download it. I'll tell you where to get all this information in the next few slides.

Intel MLSL is a scaling library. If you look at training, which is extremely compute-intensive, many people are looking at training on a cluster of machines. For example, if a single node takes n hours to train your deep neural network model, then with 8 nodes, 16 nodes, or 64 nodes the time to train should be significantly lower. As a developer, what you need is for the training to scale well as you increase the number of nodes; and as you do, the work done per node goes down while the communication goes up, so you need to take care of that to actually get the scaling. That is what this library optimizes, so you as a developer don't have to worry about it; we take care of it for you. There's a small sketch of this pattern just below. And then the Intel Nervana Graph is going to be a new one; it's not out yet, but it's a graph compiler which takes your data flow and compiles it in a hardware-agnostic way. That's coming out fairly soon. So those are some examples of the software libraries that are available; check them out.

In addition to that, people typically don't even write at that level. Most of the ISVs and customers we go to are all working with frameworks, the ones pretty much everybody knows, like TensorFlow, Caffe, et cetera. So we have taken those and optimized them for Intel hardware. If you're using Caffe, all you have to do is go and get the Intel-optimized Caffe; I have a couple of case studies on the performance we were able to get, and you'll be able to get that performance on your hardware, assuming you're running on Intel. We've done this across Theano, Torch, Caffe, TensorFlow, et cetera.
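Coming back to that MLSL scaling point, here is a conceptual sketch of the data-parallel pattern it optimizes. MLSL itself exposes a C/C++ API, so this sketch uses mpi4py purely as a stand-in communication layer, with a toy linear-model gradient standing in for a real network; everything here is illustrative, not the MLSL interface.

```python
# Conceptual sketch of data-parallel training, the pattern MLSL tunes.
# MLSL itself is a C/C++ library; mpi4py stands in for the comms layer,
# and a toy linear model stands in for a real network. Illustrative only.
# Run with e.g.: mpirun -n 8 python scaling_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nodes = comm.Get_rank(), comm.Get_size()

# Each node holds 1/nodes of the batch, so local compute shrinks as you
# add nodes.
rng = np.random.default_rng(seed=rank)
X = rng.standard_normal((1024, 256))   # this node's shard of the batch
y = rng.standard_normal(1024)
w = np.zeros(256)                      # model weights, replicated everywhere

# Local gradient of a least-squares loss on this shard.
local_grad = X.T @ (X @ w - y) / len(y)

# Every step now pays this communication, which grows with model size and
# node count; this allreduce is exactly what a scaling library optimizes.
global_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
w -= 0.01 * (global_grad / nodes)      # one averaged-gradient update
```

As you go from 8 to 64 nodes, the per-node compute term shrinks while the allreduce term doesn't, which is why naive scaling flattens out and why a tuned communication library matters.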
And there's Chainer, an open source framework that's widely used in Japan; I'm not sure how many of you know it, but we're working on that one as well. So again, something for you to go and check: if you're already running Caffe, are you running the Intel-optimized Caffe? If not, go get it so you can pick up performance right there. On top of that, we have some special SDKs. We have a Deep Learning SDK; I won't go into much detail, but you get the idea. We have a Computer Vision SDK with all those primitives already implemented and optimized for the hardware. And we just released something very interesting from Movidius, one of the companies we acquired: a compute stick. Not sure if you've heard about it, but it plugs into your laptop, it has CNN support optimized on it, and if you want to do a quick prototype, you can do it right on your laptop. That compute stick is available now. So one thing is: if you're running on Intel, go check these out so you can get much better performance.

Some case studies; I won't go into the details. A cloud service provider in China that we worked with was using the stock open source BVLC Caffe with a CNN model. When we worked with them, using the Math Kernel Library and the Intel-optimized Caffe, we were able to decrease their training time by 30x. That is not a few percent; you're going to see much better performance when you use some of these.

Here's the second one, and this is a combination of hardware and software, because we have many customers using different levels of hardware; the SKUs are different, et cetera. The leftmost bar is the baseline, a two-socket Xeon server. The second bar is the same system using the 2017 version of MKL: straight away a 3.5x performance improvement. Then they went to the next generation of server hardware, with the MKL build optimized for it, and got a further 2x, so almost 8x from the beginning. And when they went to the third generation of server, the latest generation we have, the cumulative gain was even higher. These are actual customer cases, so a combination of hardware and software can make a lot of difference for you, particularly if your challenge is in the performance area, hitting real time or whatever goal you have.

One of the use cases we're very, very excited about is autonomous driving. As we speak, we are working with BMW, which is one of the big carmakers, and we recently announced that we are going to work with Audi and other carmakers as well.
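If you want to sanity-check where your own setup stands before chasing gains like those, here's a minimal sketch: confirm which BLAS your NumPy is linked against (with the Intel Python distribution it should report MKL) and time a large matrix multiply, since dense math of exactly this kind dominates training time. The matrix size is an arbitrary illustration.

```python
import time
import numpy as np

# Which BLAS is this NumPy linked against? With the Intel Python
# distribution you should see MKL reported here.
np.show_config()

# Rough GEMM throughput: dense matrix math of exactly this kind dominates
# deep learning training time, so the delta here tracks those case studies.
n = 4096  # arbitrary illustrative size
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

a @ b                                  # warm-up run
t0 = time.perf_counter()
a @ b
dt = time.perf_counter() - t0
print(f"{2 * n**3 / dt / 1e9:.1f} GFLOP/s")
```

Running this once on a stock build and once on an MKL-backed build of NumPy gives you a first-order picture of the software-only part of those speedups on your own machine.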
We have seen that this segment is growing faster than we actually thought. Recently we also announced that we're acquiring Mobileye, an Israel-based company which is one of the world leaders in vision-based driver assistance. And if you look at this as an end-to-end use case: there's inference in the car, in what we call a software-defined cockpit, which does some of the compute, all the way back to the models being trained in the back end, so we have technologies that can be used end to end. In autonomous driving, low-latency communication in particular is absolutely critical, because the car has to make a decision right away, whether that's a person or a dog or a cat in front of it; the latency is critical. So we are spending quite a bit of time on 5G and going from there. This is just one end-to-end use case of how you can take the full breadth of the technologies, from software to hardware to compute to connectivity, and get your use case delivered; there can be many other examples.

Going a little deeper into the hardware itself: sometimes it's not just one particular piece of hardware; you can also take a combination of hardware to get your models trained. A few examples: the top of the slide is training the models, and the bottom is inference. For training, if it's simple machine learning, a standard Xeon is often enough; for deep learning you can go to Xeon Phi, and later this year we are bringing out the Nervana ASIC, code-named Lake Crest, which is aimed squarely at cutting training time. Maybe you'll come to know more about it over the next few hours. On the inference side, at the bottom: if you don't really need real time, say you have a day to run your data through the trained model, a Xeon plus FPGA combination will do. But if you really need real time with absolutely low latency, for example stock prediction, with a stream of data coming at you at high speed that you have to make sense of really fast, then something like an FPGA with a highly optimized algorithm is going to give you much better performance. There are various of these models to look at, and when customers are developing or deploying, these are the questions we get asked a lot: what can I use, what is available, how can I bring the time down? Typically they come with a problem, and we help them with which architecture and which of our technologies fit.

So Lake Crest is coming later this year. And as we move along over the next three years, one thing we are trying to do with these specialized ASICs, highly optimized for deep learning and machine learning, is to integrate them right into the Xeon die.
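On that real-time inference point, here's a small sketch of how the question gets framed: measure per-event tail latency, not average throughput, because the tail is what decides whether a general-purpose CPU is enough or you need FPGA-class determinism. The `score` function here is a hypothetical placeholder for whatever model is actually being run.

```python
import time
import numpy as np

# Hypothetical stand-in for the real inference step (a trained network,
# an FPGA offload call, etc.): any per-event scoring function works here.
w = np.random.rand(64).astype(np.float32)

def score(event: np.ndarray) -> float:
    return float(w @ event)

# Replay a stream of events and record per-event latency.
latencies = []
for _ in range(100_000):
    event = np.random.rand(64).astype(np.float32)
    t0 = time.perf_counter()
    score(event)
    latencies.append(time.perf_counter() - t0)

# For real-time cases (the stock-tick example), the tail decides the
# hardware choice: a great average is useless if p99 blows the budget.
p50, p99 = np.percentile(np.array(latencies) * 1e6, [50, 99])
print(f"p50: {p50:.1f} us   p99: {p99:.1f} us")
```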
So you'll get to see that integrated acceleration coming in, and we are expecting almost a 100X reduction in the time to train a model. You'll see a lot of that coming; more information over the next year or two, but watch out for some real innovations happening in that field that might be interesting to a few of you.

That was the products; but my day job is to work with software ISVs and the companies I talked about. How do we work with them, and with the ecosystem? The next couple of slides cover that. Most of the information I talked about is on one of the largest developer programs, software.intel.com. That's where we work with around six and a half thousand ISVs worldwide, and over 20 million developers come to that website. We have various modules: in addition to artificial intelligence, which is obviously hot, we have technical computing, which is HPC and modern code; we have gaming; we have the Internet of Things; various themes where developers come in and get educated. We also have a lot of sample code, a lot of tools you can download and try out, and forums where you can post your questions and get answers pretty quickly. So it's a one-stop shop: software.intel.com. If you don't know about it, go check it out.

If you're specifically into AI, and you want to go deeper on the hardware and software I talked about, say you're interested in the Deep Learning SDK or the CV SDK, there's what we call the Intel Nervana AI Academy. This is again a portal where you can get all the technical content: videos, blogs, white papers, code samples, tools, libraries, everything is there in the AI Academy. Membership is free, and as a member you get a lot of additional resources and a direct connection with some of the people within Intel. So if you're really interested in the tools and the community, and we already have a lot of developers on there, go check it out.

Another program we run, where we've seen quite a few successes over the last few years, is what we call the Intel Software Innovator program. A lot of software developers are very active in the community: they go out and do a lot of meetups and so on. What we do is bring them under this program. In APJ alone, which is Asia Pacific and Japan, which I run, we have around 30-odd Intel innovators, and out of those, about 20 are in India. These are typically people who are very interested in understanding more, and they get a lot of benefits by becoming an Intel Software Innovator. For example, almost like being under NDA, we give them a lot of early technology, some of what I talked about, almost the same as what we give our customers, so they know what is coming.
They'll also get remote access to our hardware: we have a bunch of clusters available, so as an innovator you can have access to that hardware and run your experiments directly on top of it. You also get the tools and a lot of the technical content, which is a very big deal. The second thing you get is exposure, potentially, from a brand perspective, and you get to know that community of innovators. And if you're going out and speaking at meetups and other events, we support you there as well. So if you weren't aware of it and you're interested, check it out; I think we have around two or three innovators here today, so you can talk to them about it and see if you're interested. I'm not going to cover much more, but we do have a special program for academia. I'm not sure how many people here are from academia, but there's a slightly separate, very similar program we run there, for student ambassadors.

So finally: as I told you at the beginning, this was a slightly different talk, where I touched on Intel's vision, where we see some of the opportunities, particularly where you can develop your solutions, and at the same time the kind of innovation and R&D we are doing, particularly around the hardware on which most of these artificial intelligence workloads, machine learning and deep learning, will run. This afternoon there's a very detailed technical session; I just gave you the 50,000-foot view, and one of my co-workers, Kesh, is going to really drill down into all these different things. If you're interested, check it out at 3 p.m.; it's going to be more than an hour of workshop going deep into some of the aspects I talked about. And it's going to be really, really exciting: if you think you've seen a lot in the last three to five years, I think the next 10 years are going to be one hell of a journey. I'm glad you're all part of it, and together we can make something happen. Thank you, hope you enjoy the rest of the conference, and have a good weekend.