Well, good afternoon all, and welcome to Intel's AI launch. My name is Bhushan Desam. I am a Senior Business Development Manager at Lenovo's Data Center Group, and my primary focus is the artificial intelligence business. There's a lot of excitement around artificial intelligence in recent years, but there are several fears too, especially around automating everything that we do today, and around losing control of some of the things we're good at. But there are also a lot of social benefits that come from artificial intelligence, and I'm going to make a case for that: AI and computer vision for social good. So, take that picture. What do you see there? A man and a passenger van. Very simple, isn't it? Now, what is the context? A man talking on the phone is about to be hit by a van while crossing the road. And what would you do? You run to avoid the van, after making sure there's no traffic on the other side. That's the common thing you would do, instantaneously, without thinking much. And what is e^91.2345 plus 25^2.2? That's a hard problem for humans. Until the recent past, if you asked a computer, "What do you see?" - no clue. "What is the context?" - no clue. "What would you advise?" - no clue. But ask it the same math problem, and it says: now we're talking, this one is easy - 4.1941 times 10^39. So who is intelligent here, humans or computers? The computers that can solve hard math problems instantly, or the humans who can instantaneously analyze a situation, reason about it, and take a quick decision? Let's quickly look at what intelligence is. Webster's dictionary defines intelligence as the ability to learn or understand or to deal with new or trying situations; also, the skilled use of reason.
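As a quick sanity check on that arithmetic - the expression the computer answers instantly - a couple of lines of Python reproduce it:

```python
import math

# The "hard for humans, easy for computers" expression from the talk:
# e^91.2345 + 25^2.2
result = math.exp(91.2345) + 25 ** 2.2

# The e^91.2345 term dominates; 25^2.2 is only about 1.19e3.
print(f"{result:.4e}")  # ~4.1941e+39, matching the 4.1941 x 10^39 quoted
```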
So, that's what intelligence means. Now apply this to humans versus computers. Humans are very good at sensing: when we look at the picture, we immediately sense there is a man and a car. Then we learn, from our own experiences or from others' experiences - you don't have to be hit by a car to know it's a dangerous situation; you read somewhere that somebody was killed when they were run over by a car. Then you reason about it and you act on it: you immediately run, and you adapt to the situation - you don't want to run into another car, you want to run away from it so that you are safe. Humans possess and exhibit intelligence. Whereas, until the recent past, what computers were good at was doing complex math operations and data storage and retrieval: they can store petabytes of data and retrieve it in milliseconds. So there has been a clear division of labor until now: the things computers can do, we can't, and the things we can do, computers can't. That's actually a very nice division of labor between us and computers. What are computers good at? Computers are good at following rules created by humans. In the traditional way of interacting with computers, you write a program that contains some logic, and what that logic does is codify rules - the rules you want the computer to follow. When you give those rules through a program, the computer executes the program and gives you the desired output.
The real point is that computers are very good at following rules created by humans at incredible speeds - they can execute billions of operations in a second. That's what computers were good at, and that's what we've been used to. But that's about to change. This is a very popular visual recognition challenge in the computer vision community called ImageNet. ImageNet is a large database of images - about 1.2 million images belonging to a thousand different categories. The challenge is whether a computer can look at 100,000 test samples and tell what each image is, and at what accuracy. If you track the error rate on those 100,000 tests historically, in 2010 the error rate from computers was about 28% - they were getting 72% right and 28% wrong. The error kept going down until 2014, but something happened in 2015. That's when Microsoft researchers brought the error rate slightly below the human level. The human error rate is about 5%: if I showed you those 100,000 pictures, you could get 95% right and 5% wrong. For the first time, computers did this at 4.9%, slightly exceeding human capability, and in 2016 it went further down. So in 2015 you see the whole thing turning around, and the press went wild - sorry, I'm another person blaming the press - saying, hey, AI is here, it's going to take your job, it's going to change everything. That was the main theme of the story. But keep in mind, it's only doing one task. It's not doing everything, is it?
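The error rate behind those ImageNet numbers is just the fraction of wrong answers over the test set. A toy sketch with made-up labels (note that the published ImageNet figures are top-5 error, where a prediction counts as correct if the true label is among the model's five best guesses):

```python
def error_rate(predictions, ground_truth):
    """Fraction of test samples the classifier got wrong."""
    wrong = sum(p != t for p, t in zip(predictions, ground_truth))
    return wrong / len(ground_truth)

# Toy example: 10 predictions against 10 true labels
truth = ["cat", "dog", "van", "cat", "dog", "van", "cat", "dog", "van", "cat"]
preds = ["cat", "dog", "van", "dog", "dog", "van", "cat", "cat", "van", "cat"]
print(error_rate(preds, truth))  # 0.2, i.e. a 20% error rate
```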
All it is saying is: this is a picture of a cat, this is a picture of a dog - and only within those thousand categories. If you show it something outside the categories, it simply says, I don't know what it is. So it's very, very narrow. Keep that in mind. It's not solving every problem; it's taking one dataset of images and trying to find which category an image belongs to. So is this really artificial intelligence? Let's look at some definitions. We looked at intelligence, but what is artificial intelligence? Artificial intelligence is the intelligence exhibited by machines or software - that's the Wikipedia definition, a very general one. And we already reviewed what intelligence is: the ability to sense, learn, act, reason, adapt - all of them are considered intelligence. A more technical definition of artificial intelligence is the study and design of intelligent agents, where an intelligent agent is a system that perceives - that's the key thing - its environment and takes actions that maximize its chance of success. Perceiving and acting: those are the two things. Now take this and apply it to the ImageNet challenge: is that really artificial intelligence? In fact, there are two kinds of artificial intelligence: narrow artificial intelligence and general artificial intelligence, and there is a big gap between the two. One has to realize that. Narrow AI today is driven by industry - both large companies and a lot of startups working on it.
What narrow AI does is address one specific area: it recognizes images, recognizes voice, translates language, or runs business analytics. Those are the typical use cases for narrow AI. Today it's already deployed by hyperscale companies like Facebook, Google, and Microsoft. When you upload an image to Facebook, it suggests some tagging - it's actually doing image recognition underneath. And now it's finding very good applications in medical diagnosis, education, finance, and manufacturing. I'm going to review some of those applications. It's poised to create real societal and economic benefits. That's the scope of narrow AI, and that's what is actually happening today. But then there is general AI, mainly driven by academia and scientists. What they want is for computers - for artificial intelligence - to reach a level where it really exhibits the intelligent behavior that we exhibit: taking multiple things together, sensing, acting, learning, reasoning, and reacting to a situation the way we react to a situation. But if you track the history of AI, it has gone through booms and busts twice. Perhaps we are not there yet; there is still little progress, and it needs decades and decades of research in this area. It has important and complex implications for society, but we are at least a couple of decades away. And even if we get there, technology always poses several challenges, doesn't it? Go back to the very early days.
The steam engine revolutionized the way muscle power was replaced by mechanical power, which had huge implications for productivity in society. But at the same time, the steam engine was powered by coal, and coal puts carbon dioxide into the atmosphere and is believed to cause global warming. Yet if you compare a coal power plant from 20 years ago with one today, today's plant is only putting out 1% of the bad stuff - 99% is cleaned out. That, again, was driven by policy, and policy in turn drove the environmental technologies that clean up what comes out of a coal power plant. AI is going to be an evolving field in the same way: there will be lots of technological progress, but there will also be policy changes, and they all have to come together to deliver the right benefit - whether it's general AI or narrow AI. So let's come back to the ImageNet challenge. That was the first time computers beat us at recognizing images. What made it possible? There are three key factors. One is computing power itself: today's computers are very fast. Second, there are tons of data available today compared to five or ten years ago. And third, there have been several algorithmic advances. All three came together to make that kind of progress possible. Let's review each of them. First of all - we're coming from Lenovo, so we want to talk about computing power, don't we? We always say you need more. What's showing on the right-hand side here, on my right-hand side, is the Top500 list of supercomputers in the world. The middle curve shows how the capacity of the number one computer is growing, and the bottom curve shows the 500th computer.
What is the power of the 500th computer? And the top curve is the sum of all the computing power from the number one to the 500th computer. But keep in mind that this is a logarithmic scale, not a linear scale, so do not take these advances lightly. Here's why: if you take just one Intel CPU today, that one CPU can deliver almost one teraflop - a trillion floating-point operations per second. And interestingly, if you track back 25 years, all of the top 500 supercomputers in the world combined - supercomputers, not consumer computers - delivered only about one teraflop. Twenty years ago, the fastest supercomputer on earth was delivering what one of your CPUs delivers today. That is astonishing progress in technology. And see where we are today. The fastest supercomputer in the world is in China, at the National Supercomputing Center in Wuxi. It delivers about 100 petaflops of computing power. The US is building the next-generation supercomputers, called Summit and Aurora, which will also be in that 100-plus petaflops range. And the one here is the Marconi supercomputer that Lenovo and Intel built together for the Italian research institutions - about 11 petaflops, in Bologna, Italy. It's number 12 on the Top500 list. So, to sum up, computing power keeps going up and up, and at the same time prices are going down: for the same dollar, you can afford more and more computing power.
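The "one CPU is almost one teraflop" claim follows from the standard peak-FLOPS formula: cores × clock × floating-point operations per cycle. A sketch with purely illustrative figures, not the specification of any particular Xeon model:

```python
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    """Theoretical peak = cores x clock x FLOPs issued per cycle per core."""
    return cores * clock_ghz * flops_per_cycle

# Illustrative figures: 22 cores at 2.2 GHz, with wide vector units
# sustaining 32 double-precision FLOPs per cycle (fused multiply-add).
print(peak_gflops(22, 2.2, 32))  # 1548.8 GFLOPS, i.e. ~1.5 teraflops
```

Real sustained performance is lower than this theoretical peak, but it shows why a single modern server CPU lands in teraflop territory.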
The next advance is on the computational, or algorithmic, side: deep neural networks. Neural networks are inspired by the human brain. You can see multiple circles or squares there - think of those as neurons. They don't act exactly like neurons, but they mimic them. When you feed something to the neural network - say, an image - the first layer captures the very low-level features: it tries to detect edges. Then the next layers put features together, and eventually the network tries to capture a face. By creating these abstracted layers, it can keep a lot of information. What typically happens when you develop an application is that you build a neural network and you keep training it with data. The standard example everybody uses in deep learning is the cat - why a cat, nobody knows; it's like "hello world," the first program everybody writes without knowing why. Imagine you feed a million cat images to the network. As you keep showing it those images, it creates an abstraction: oh, this is what a cat looks like. This is exactly how we learn when we grow up. A parent gives us a nice picture book and says, "This is a cat." They don't say a cat has two ears and four legs; nobody tells us that. We start creating an abstraction, and we learn by example - after a hundred times, we say, hey, this is a cat.
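That learning-by-example idea can be sketched in a few lines with a single artificial neuron (a perceptron) on toy two-feature data - a drastic simplification of a deep network, and the data here is made up for illustration:

```python
# A minimal "learning by example" sketch: one artificial neuron
# shown labeled examples until it finds its own rule.
# Toy data: each example is ((feature1, feature2), label).

examples = [
    ((2.0, 1.0), 1), ((3.0, 2.5), 1), ((2.5, 2.0), 1),   # label 1 ("cat")
    ((0.5, 3.0), 0), ((1.0, 4.0), 0), ((0.2, 3.5), 0),   # label 0 ("not cat")
]

w = [0.0, 0.0]  # weights, learned from examples rather than hand-coded rules
b = 0.0         # bias
lr = 0.1        # learning rate

for _ in range(20):                      # a few passes over the examples
    for (x1, x2), label in examples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred               # 0 when right, +/-1 when wrong
        w[0] += lr * err * x1            # nudge the weights toward the answer
        w[1] += lr * err * x2
        b += lr * err

# After training, the neuron classifies the examples it has "seen":
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in examples])
# -> [1, 1, 1, 0, 0, 0]
```

Nobody wrote a rule saying what a "cat" is here; the weights were adjusted from examples, which is the same principle a deep network applies at vastly larger scale.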
And then we slowly start learning the context, and it goes on from there. The key part here is learning, and specifically learning by example. That's how deep neural networks work. We've talked about computing power and algorithmic advances; the next factor is the big data explosion. There is a lot of data today. Typically, when we talk about big data, there are three elements: the volume of the data, the variety of the data, and the velocity of the data. Look at these numbers: 44 trillion gigabytes of data by 2020; 31.25 million messages; 7,200 hours of video; hundreds of sensors in cars; 500 million tweets; 6 billion devices. These are huge numbers, and that's just the volume. When it comes to variety, you now have images, video, text - so much variety in the data. And then velocity: when those hundreds of sensors are pumping out data, they pump it out very fast. And note that these numbers are per minute, per hour, per day - a huge amount of data coming at a very high pace. The real challenge is: how do you make sense of this data? But at the same time, there's an opportunity, because deep neural networks get better with more data. So the challenge itself creates an opportunity - that's how nature is, isn't it? Okay, I briefly mentioned at the beginning that there are two common fears when we talk about AI. One is AI replacing humans by automating tasks that we handle well today. The other is AI exceeding human capabilities and taking over the world. Those are the two common fears. On this, there is a very good report - if you're interested, I highly recommend you read it.
It's a very easy read. It came out in October 2016, published by a committee of the National Science and Technology Council - the typical body that advises the US president on what is coming in the future and its implications for our society, whether employment, environment, or anything else. What the report notes is that the best way to approach some of the jobs of the future is to pair humans with machines, in situations where each partner compensates for the weaknesses of the other. So let's see whether humans working together with AI-powered computers can improve the outcome. Would this work? Let me walk through an example. The dilemma here is: humans or AI - which one to pick? This is research done by MIT scientists together with Harvard Medical School folks. They were trying to address a challenge: the detection of breast cancer from images of lymph node biopsies. You take these biopsies, do the imaging, and then you want to see whether there's a tumor or not. When they did that with artificial intelligence - just classifying whether there is a tumor is a classification problem - the computers were about 92.5% accurate. But when a pathologist looked at the same images, he achieved 96.6% accuracy. The next category is localization accuracy: you take the image and you need to identify which part of the image the tumor is in. There, the AI was doing about 70.5% while the pathologist was doing about 73.3%.
So AI is getting close to the pathologist's level. But the most interesting part is that when the pathologist was assisted by AI, the overall accuracy improved to 99.5%. That's a very good accuracy - it's an 85% reduction in the human error rate. How good is 96.6%? In some cases, not good enough. You don't want to be in that 3.4%: you could be wrongly diagnosed and wrongly treated, and that has serious implications for human lives. But look at that combined accuracy: 99.5%. That's very good. So we can work together: you can bring humans and artificial intelligence together, and augment human intelligence with artificial intelligence. That's the message, and it makes sense. We talked about healthcare and how we can improve it; now let's look at a social issue. I'm not sure this is the right place to talk about it, because we are in a bar, okay? But this comes from the National Institute on Alcohol Abuse and Alcoholism, and what it says is that underage drinking is a serious problem in the United States. Actually, not just in the United States - throughout the world. We see this even in developing countries; I'm originally from India, and I see this problem there too. It's not just one country's problem. The data they present says that 9.7% of eighth graders consume alcohol at some point, 21.5% in 10th grade, and 35.3% in 12th grade. They don't drink frequently, but when they drink, they drink in excess, and they pose serious health and safety risks. This is a serious problem. So let's see if we can apply artificial intelligence to identify or address it.
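The "85% reduction in human error" follows directly from those accuracy figures; a quick check using the percentages quoted above:

```python
human_acc = 0.966      # pathologist alone
combined_acc = 0.995   # pathologist assisted by AI

human_err = 1 - human_acc        # 0.034 -> the 3.4% at risk of misdiagnosis
combined_err = 1 - combined_acc  # 0.005

reduction = (human_err - combined_err) / human_err
print(f"{reduction:.0%}")  # 85%, matching the figure quoted in the talk
```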
I don't know if any of you had a chance to check out the demo Lenovo is showing there - it's a demo to identify your age. Keep in mind that it's a work in progress; we've only fed it 20,000 images, so it's not a complete demo yet, but it shows the early potential. What it does is estimate your age from facial features, and you can use that to identify persons under 21 in an image or video. That's one example. The second one: Intel is demonstrating a demo right there. It looks at a scene, localizes things, and immediately identifies people and objects like bottles. Again, you can train the system to identify alcoholic beverage containers - beer comes in only a finite number of container shapes, so you can use those shapes to identify that kind of usage. And then there is an MIT startup in Boston called Netra. What Netra does is use computer vision to identify brands in pictures and videos. When you show it this picture, it picks up the Ray-Ban and says with 100% confidence that it's a Ray-Ban - imagine that. You can train this, again, to identify popular brands of alcohol, whether it's a Stella, a Budweiser, or a Bud Light. That's how you can identify those brands.
Imagine you put all three of these together - age identification, container detection, brand recognition - and build a tool. It could help enforcement agencies, especially in large crowds and large public places, to identify underage drinking. That's just an example of how these technologies can be put together to solve some of these problems. Going back now: we looked at healthcare, we looked at how AI can address a social challenge, but let's also look at how it can deliver an overall benefit to society at large. This is where we need to review history. If you track GDP per capita, it stayed almost flat until the late 1700s, and then it just took off - the classic hockey-stick pattern. The steam engine, invented in 1775, is credited with the industrial revolution that gave us massive productivity and human prosperity by replacing muscle power with energy from natural resources. From then on, you could basically have as much energy as you wanted, wherever you wanted: rather than taking a thousand people to do some project somewhere, you take a machine there and do it. That power gave us a lot of ways to improve our lives, and it helped the economy. I'm not saying it didn't have bad implications - at the time, a lot of people doing manual labor were displaced, but they transformed and found work elsewhere. A lot of muscle work has since moved to knowledge work.
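The three-way combination described above might be wired together roughly like this. Everything here is hypothetical: the detector functions are stand-in stubs returning canned results, not real APIs from the demos mentioned.

```python
# Hypothetical pipeline combining the three signals from the talk:
# age estimation + container detection + brand recognition.

def estimate_ages(frame):
    """Stand-in for an age-estimation model (like the Lenovo demo)."""
    return [17, 34]                      # estimated ages of detected faces

def detect_containers(frame):
    """Stand-in for an object detector trained on beverage containers."""
    return ["bottle", "can"]

def recognize_brands(frame):
    """Stand-in for a brand-recognition model (like Netra's)."""
    return ["Budweiser"]

def flag_underage_drinking(frame, drinking_age=21):
    ages = estimate_ages(frame)
    containers = detect_containers(frame)
    brands = recognize_brands(frame)
    # Flag only when all three signals co-occur in the same scene.
    underage = [a for a in ages if a < drinking_age]
    if underage and containers and brands:
        return {"underage_ages": underage, "containers": containers,
                "brands": brands}
    return None

print(flag_underage_drinking(frame=None))
```

The design point is that each individual signal is weak evidence on its own; requiring all three to co-occur is what makes the combined flag useful.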
And the next generation could be something else: knowledge work powered by AI. That could be the trend going into the future. The key thing is that the steam engine took natural resources and delivered value from them. Today, in the digital age, we humans have created a big resource of our own: big data. That's the one I was talking about - we are creating lots and lots of data, and it's a man-made resource. Is there value in it? If you look at how you can extract value from data, you can put the methods into four different buckets. There is descriptive analytics: you take all the data, or a sub-sample of it, and you see what happened and how it happened. This is what you typically do in monitoring - you keep watching something and notice when it breaks. Then there is diagnostic analytics, where you take the data and create a hypothesis about why it happened. Compared to those, there are two other kinds of analytics that can provide even more value, but they are much more difficult to do: predictive analytics and prescriptive analytics. In predictive analytics, you want to predict ahead of the data: you don't have tomorrow's data, but you take what you have today and predict what's going to happen tomorrow. And in prescriptive analytics, you want to actually change the outcome.
You do some kind of scenario analysis: if I know what outcome I want, how can I force that scenario? Predictive analytics predicts an outcome; prescriptive analytics works out the input conditions that can force a desired outcome. That's where the high value is, and that's where the high productivity is. And ironically, while we're collecting a lot of data, 90% of it is never analyzed, because there are no good methods to do this kind of high-value analytics in every scenario. So can AI get us there? That's the fundamental question. The two axes I'm plotting are the volume, velocity, and variety of the data on the X axis - there's a lot of data coming in - and the value you can create from it on the Y axis. You can do a lot with traditional programming, but with unstructured data - if you want to analyze images, video, and so on - it's very difficult to program everything in the traditional manner. Traditional programming is driven by humans, and to some extent we're good at it; that's how we've been dealing with computers, primarily with numeric data, tons and tons of it. If you know what to program - if you can codify the logic into rules - it really works. But beyond that it saturates; you cannot deliver more value that way. That's when you move to classical machine learning. Machine learning is a broader umbrella, and that's where deep learning fits.
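The difference between those buckets can be shown on a toy time series: descriptive analytics summarizes what already happened, while predictive analytics extrapolates forward. Here is a minimal sketch with made-up numbers and a simple least-squares line; real predictive models are far more involved:

```python
# Toy monthly metric, e.g. events observed per month
data = [10, 12, 13, 15, 16, 18]

# Descriptive: summarize what already happened
mean = sum(data) / len(data)
print(f"average so far: {mean:.1f}")

# Predictive: fit a straight line and extrapolate one step ahead
n = len(data)
xs = range(n)
x_mean = sum(xs) / n
slope = sum((x - x_mean) * (y - mean) for x, y in zip(xs, data)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = mean - slope * x_mean
print(f"predicted next month: {slope * n + intercept:.1f}")
```

Prescriptive analytics would go one step further and search over the inputs you control to steer that predicted value toward a target.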
But classical machine learning still needs some human input, because you have to define the high-level features and ask the machine to learn the low-level ones in order to make the estimation. And classical machine learning doesn't scale very well - at some point it also saturates. Then comes deep learning, based on very deep neural networks. That's when they can come in and help us create high value: they can analyze video, images, text, and more. And look here: human skills are not adequate. If I give you a petabyte of video, can you analyze it? You cannot - we cannot analyze that amount of data. We also cannot spot patterns at that scale: as the data gets bigger and bigger, you have to look at 20 or 30 variables, and you simply cannot extract patterns by hand. So let AI handle this. This is where AI can help us in the digital age to achieve higher productivity, the way the steam engine did during the industrial revolution. The digital age really needs AI to extract high value from data. Let me give you one quick example of how Lenovo is using machine learning to extract value. We get a lot of social media traffic - people talk about our products: this is good, this is bad, I liked this, I didn't like this. There are lots of images and videos, and that kind of data is very difficult to analyze. So we used machine learning to understand what customers were saying about our products, took that feedback, and built the Yoga product, which has been very successful.
The Lenovo Yoga brand is very successful - that's what we created. Again, this is where humans cannot take hundreds and hundreds of hours of video, or thousands of images, and synthesize them into something simple that we can understand. That's what computers and AI are doing for us. Another point: AI is going to impact nearly every industry. I'm just highlighting some examples here: investment, healthcare (as in the example I gave you), automation and self-driving cars, marketing, oil and gas, manufacturing, security and defense, media. There are lots of industries that will be impacted by AI, and in all of them there are opportunities to leverage artificial intelligence in the right way. Now let me switch topics: how do you build an AI system? If you really want to build an AI system, you have to store a lot of data. The data might come from social media, or from your customers and enterprises. You collect all of that data in a big data system - this is where you store the data, prepare it, and feed it onward. Then the neural network gets trained, and this is where you need a lot of computing power - that's why I was talking about the importance of computing power. It's common now to see supercomputing centers pivoting toward AI, because they know how to do this kind of high-performance computing problem. Once you train a neural network, you want to deploy it.
You can deploy it on mobile phones, smart devices, PCs, or laptops. Each of these stages needs a different kind of technology. For example, the big data systems typically run on Intel Xeon processors; Intel is releasing the Xeon Phi, which is well suited for machine learning training; and the Altera FPGAs and Intel Xeon serve inference. That's how you can build a system from the technologies Intel is developing, and we provide those systems to AI customers and users. Our mission at Lenovo is to democratize AI. How are we doing that? We are taking a three-pronged strategy. First, we are building an innovation center. Its purpose is to demonstrate to you what AI could do: we want to build applications like the image-diagnosis work I described to demonstrate the value of AI. Second, we want to create a development platform that our customers can take and immediately start building applications on, rather than worrying about what hardware to bring, what software to bring, and everything else. And finally, deployment: we plan to deliver scalable systems, because in an enterprise context you may be dealing with petabytes of data and you want an end-to-end solution, so that's what we want to deliver. That's what Lenovo is doing to democratize AI. Thank you all for your attention; I appreciate your time. Any questions? So this is one example I came across, but there are definitely other areas.
One is, again, around image-based applications, because a lot of AI today is image-based. They've done this in healthcare, and they're doing it in oil and gas. In oil and gas seismic processing, they collect a lot of data: they send acoustic signals and then do the imaging, right? Today somebody actually looks at those images and tries to estimate the oil reserves, whereas now AI is doing that, and by looking at examples like these they're asking whether the two can be linked together. That's one. The second one, again in healthcare: Massachusetts General Hospital is doing a lot of work. They have billions of images in their repositories, and they are training AI systems on all those radiology images. The idea is that before a radiologist even sees a scan, the AI can predict something, and that helps the radiologist: instead of spending 15 minutes, the radiologist can spend only five minutes verifying the diagnosis. Some of the information the system surfaces is genuinely helpful. It's very similar to the other example I quoted, but they are trying to bring it into production. The example I gave earlier was research, a paper from MIT and Harvard, whereas Mass General is trying to put this into production and bring it to patient care. Oil and gas is looking at this too. There are probably other examples, but these are the ones I've come across. Other than imaging, what else? Imaging is one, but I also see a lot of big data analytics, and today a lot of that analytics is based on statistical packages.
So this is where enterprises are moving, not directly to deep learning, but at least to machine learning; that seems to be the next step. Imagine you want to give somebody a loan. Today, the three or four parameters financial companies look at to underwrite your loan are basically your salary, your age, and so on. But now imagine you want to look at 20 variables to get better accuracy. The statistics-based packages cannot correlate across 20 variables, so this is where you do predictive analytics using machine learning. What machine learning does is take all 20 variables and correlate them underneath; that means you are putting finer granularity into your data. Enterprises are very interested in that kind of work: how can I apply machine learning as the next step beyond traditional statistical techniques? The second trend I'm seeing is this: you typically have structured data living in databases, and now you also have this image data, and a lot of companies are trying to bring the two together, to aggregate them. For example, you as a customer are in my database; I'm an enterprise, I sold you something. But you may also be posting on social media. In the example I gave earlier, there was no linkage between the customer and the data; we were just looking at the data on its own. But now I want to really understand your preferences; I want to treat you as a segment of one, basically. I want to see what you want to buy and know what your preferences are. That's where they're trying to bring these two together.
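To make the "correlate among 20 variables" point concrete, here is a small sketch that scans all 190 pairwise correlations across 20 features, the kind of relationship a machine learning model would then exploit jointly. The applicant data is synthetic and the planted dependency (feature 5 tracking feature 0) is invented for illustration; real underwriting models are far more involved.

```python
import math, random

random.seed(1)

# Synthetic applicant records: 20 numeric features per applicant (invented).
n_features, n_applicants = 20, 200
data = [[random.gauss(0, 1) for _ in range(n_features)] for _ in range(n_applicants)]
# Plant a dependency: feature 5 mostly follows feature 0, so there is a pattern to find.
for row in data:
    row[5] = 0.8 * row[0] + 0.2 * random.gauss(0, 1)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# All-pairs correlation across the 20 features: 20 * 19 / 2 = 190 pairs.
cols = list(zip(*data))
pairs = {(i, j): pearson(cols[i], cols[j])
         for i in range(n_features) for j in range(i + 1, n_features)}
strongest = max(pairs, key=lambda p: abs(pairs[p]))
print(strongest)   # -> (0, 5), the pair we planted
```

A pairwise scan like this is still classical statistics; the step up the talk describes is feeding all 20 variables into one model (a logistic regression, gradient-boosted trees, or a neural network) so their joint effect on loan outcomes is learned rather than inspected pair by pair.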
It's not just looking at the image data by itself. So those are some of the emerging trends, and that's where enterprises are finding value. Yeah, of course. We see higher education working in two different areas. One, academia is pushing neural network research itself: they want to build better algorithms. Of the three ingredients, computing power, data, and algorithmic advances, they're making a lot of the algorithmic advances. The second area: many enterprises cannot adopt some of these methods because they don't know how. This is where academia, again, shows a way of doing it. They want to show, hey, this is possible, and then the next step is for enterprises to pick it up and see how they can use the application. So those are the two areas: algorithmic advances and application advances. That's what academia is doing. Any questions? Okay. Well, thank you very much for listening to my talk; I appreciate the time. I hope you enjoy the rest of the day. Thank you.