So to talk about advances in AI and how we'll see its impact in the enterprise, let's start with where AI itself began. Let's go back more than a century to this character called Tik-Tok, which of course most of us most recently hear of as an app. The name comes from Frank Baum's Oz series; Tik-Tok, a mechanical man, appeared in Ozma of Oz back in 1907. And in fact, the term "robot" came from Karel Čapek's 1920 play R.U.R.

Moving ahead a little, when talking about AI, the Turing test of course comes to mind. It was originally called the imitation game, where an interrogator questioned two different people and tried to identify who was who. It then morphed into a setup with a computer and a person, where it was hard for an observer to say which of the two was the computer and which was the real person. Moving ahead, many of you might be familiar with the movie Ex Machina, which is Alex Garland's take on the Turing test. We see reinterpretations like that happening over time, of course.

Continuing into the 1950s, this is where a lot of progress happened in AI, especially in terms of how people were thinking about it conceptually. The Dartmouth conference was very influential, driven by Marvin Minsky and John McCarthy, and it set the direction of where AI was going for the next few decades. In fact, the term "AI" itself was coined for this conference by John McCarthy.

Move ahead a decade and there was the chatbot ELIZA. Of course, we're all familiar with lots of chatbots now; this was one of the first, a very early conversational system, but it impressed lots of folks. It was really meant to be very superficial by Joseph Weizenbaum when he created it, but it ended up feeling very real, especially with the DOCTOR script that you see here, where it posed as a psychotherapist. The name itself came from Eliza Doolittle in G.B. Shaw's play Pygmalion, which was made into My Fair Lady just around the same time. So that's a clip from the My Fair Lady movie.

Move ahead again and we get to the DENDRAL program, the earliest example of an expert system. It automated the behavior of organic chemists. The idea behind expert systems was that an expert created a knowledge base that could be leveraged by an inference engine, typically a simple rules engine, to answer user queries. The work on DENDRAL and MYCIN enshrined expert systems as the leading area of AI research over the next couple of decades.

Now let's move from there much closer to the present, the last decade, and we see things changing a fair bit. While the core idea of neural networks dates back to the 1940s, Minsky and Papert wrote a book critiquing connectionism in 1969, and that critique really pushed the area into the background while expert systems gained prominence. It wasn't until the 80s that new ideas from John Hopfield and David Rumelhart led to its resurgence. Of course, it still took a few more decades for it to come to the forefront as what's now often called deep learning, and the last decade has really seen a huge uptake in what's happened here. So let's look at what's changed and what it's brought us over these years. The first part, of course, is that machines have really learned to see.
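To make that knowledge-base-plus-inference-engine pattern concrete, here is a minimal sketch in Python. The rules are invented toy examples for illustration, not taken from DENDRAL or MYCIN; the point is just the shape of the system: expert-written if-then rules, applied repeatedly by a simple forward-chaining engine.

```python
# Toy expert system: a knowledge base of if-then rules plus a simple
# forward-chaining inference engine. Rules are invented examples.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Fire any rule whose conditions are all satisfied, adding its
    conclusion, until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fever", "has_cough", "short_of_breath"}))
# -> includes "possible_flu" and "see_doctor"
```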
Think back to 2011: if you had models trying to see and understand what's in a picture, there were still huge errors, around 26% error versus roughly 5% for humans on those kinds of images. That's sort of like seeing this blurry image, but we've gone from that to seeing very, very clearly. And going from that to this is a crazy change. If you remember your evolutionary biology, the time when animals first evolved sight was a time of great change; we saw huge evolution, with lots and lots of new species being created, having a dramatic effect. And that's the same kind of thing we're seeing in machines, in so many different ways, in what we can accomplish. It's just very, very exciting.

Let me give you one example of what this has enabled in terms of real products. Google Photos comes up as a very common example. There used to be a time when we had to label all our photos ourselves. Now you take a picture, it figures out what's in there, and you can search for dogs, recognize glaciers, recognize all kinds of things.

The next part for us, as a sensory thing, is learning to hear. This again is an area where deep learning has really, really helped. In fact, in one year, 2012, the move to deep learning from the prior techniques brought almost the same improvement as 20 years of research in the area, according to researchers in that field. That has, of course, led to things like being able to talk or ask instead of having to type everything. And on the flip side, in terms of speech generation and understanding, we can do captioning and so on. For example, on your phone you might have videos you can't hear, perhaps because you're hard of hearing or you're in a place where you can't have the audio on; you can see the captions very easily as well. And in this particular example, that's happening right on the phone.

Another example is understanding language itself, going beyond vision and beyond just converting speech to text. Language, of course, plays a key role in what we do, and over the last few years we've seen huge progress in reading comprehension and other tasks like it. Most recently, of course, there's been a lot of talk about the model called GPT-3, which can do all kinds of conversational things. This particular dataset comes from Stanford; it's a question-answering dataset where human performance is listed, as you can see here. It started in 2018, and in just a couple of years we've seen models go from matching human performance to well past it. What this says, basically, is that if a computer had to take a test that needed reading comprehension, it would probably do at least as well as us, or perhaps better. Of course, the tests themselves aren't going to change anytime soon, and we'll still be taking them. But this just shows how far we've come with computers, with AI, in lots of different ways.
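As a hedged illustration of that reading-comprehension task, here is a small sketch using the open-source Hugging Face `transformers` library; this is my own example, not something from the talk, and it assumes the library is installed (`pip install transformers`). The default question-answering pipeline downloads a small pretrained model fine-tuned on SQuAD-style data.

```python
# Minimal SQuAD-style extractive question answering with a pretrained
# model. The context text here is just an illustrative passage.
from transformers import pipeline

qa = pipeline("question-answering")

context = (
    "The Dartmouth conference in 1956, organized by John McCarthy and "
    "Marvin Minsky, set the direction of AI research for decades."
)
result = qa(question="Who organized the Dartmouth conference?", context=context)
print(result["answer"])  # expected: a span naming McCarthy and Minsky
```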
Of course, this has very real implications, and some exciting ones, from the perspective of search. Look at Google in this example, pre-BERT and post-BERT; BERT is a model that really changed the game in terms of language understanding. Before this model, you can see that search matches the keywords but doesn't quite get the context. After it, the system really gets the entire sentence and can understand what's going on. Of course, it's applicable in lots of areas. One area where we see it applied heavily now is assistants, both Alexa on Amazon Echos and the Google Assistant on Google Homes, and this is an area we expect to continue to progress rapidly because of advances in speech, natural language understanding, and all of these areas together.

Now, moving on. One question that comes up, given the history we've talked through, is: why did this happen now? Why did we see these huge changes, these huge improvements, over the last decade especially? Let's take a brief look at what's been happening. If you're not familiar with these three folks, they are Yann LeCun, Geoffrey Hinton, and Yoshua Bengio. They won the Turing Award in 2018 for their work on deep learning. They were responsible for laying the foundations for the algorithmic improvements that led to the resurgence and growth of neural networks, especially over the last decade, and they had been laying those foundations for several decades to get us to this place. So algorithms are one area.

We saw those examples about images, the 26% error going down to 3%. A lot of that progress came from tests on one dataset, ImageNet; those numbers themselves were from it. This dataset had, and still has, a million images across 1,000 different classes, and it allowed people to really iterate. So data has definitely been part of it. In fact, so much so that we have cartoons like this about trying to use data for every single thing. Of course, I hope we aren't going to do this for self-driving cars, but there are lots and lots of areas where we see people generate this data and create these datasets, because they make a huge difference in how we can move things forward.

Now, the third piece, which is just as important, is computation, which has increased over time. Moore's law has helped us really improve things over the last several decades. That itself is slowing now, but we've seen lots of specialization for deep learning itself: GPUs and TPUs have really made things different. In this example, you'll see a TPU pod that's over 100 petaflops. That's as fast as some of the supercomputers in the top 10 in the world.
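As a small aside, here is a sketch of how that specialized hardware shows up for an ordinary developer in TensorFlow; this is my own illustration, not from the talk. Part of what made accelerators usable is that the same code runs on CPU, GPU, or TPU, with the framework handling device placement.

```python
# The same TensorFlow operation runs on whatever accelerator is
# available; the framework handles device placement.
import tensorflow as tf

# List whatever accelerators this machine happens to have.
print("GPUs available:", tf.config.list_physical_devices("GPU"))

# This matrix multiply runs unchanged on CPU, GPU, or TPU.
a = tf.random.normal([1024, 1024])
b = tf.random.normal([1024, 1024])
c = tf.matmul(a, b)
print("Result shape:", c.shape)
```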
With 32 terabytes of memory and a custom network connecting the chips, a pod like that has allowed researchers to really push the state of the art and see what's possible. So we've talked about compute, and really it's the algorithms, the data, and the compute that have come together. Of course, they've been further enabled by tools like TensorFlow that make it possible for developers and researchers to do some amazing things by leveraging all of these pieces.

So let's take a look at some of the products where these have played out and really made a difference. These examples are on the Translate side, and in both of them this is running on the phone, which is pretty amazing: you can go from that crazy supercomputer all the way down to the phone. Here it's translating whatever is on the screen. I can point the phone at text in real life, it understands what text is there, converts it to the language I care about, and in fact overlays it right there. In the next example, you see it overlaid on any app that has text on it; it can just convert things right away, right there. And here's another example that was launched a couple of years ago: it recognizes when everybody is smiling and everybody's eyes are open, and it can take a picture automatically at that moment. So you don't have to wait and judge to get the right shot; it just figures that out. Here's some of the hardware that Google launched last year, and AI runs through every one of these in many different ways: the speakers for voice and speech understanding, the phones for their cameras and all kinds of apps.

Moving on to healthcare, which is of course something near and dear to us in lots of ways. One of the early applications in healthcare was around detecting one of the most common diseases of the eye. These are fundus images of the back of the eye, and the disease is called diabetic retinopathy; the image on the right is slightly diseased. A doctor can look at it, and the dots and patches you see there help them recognize the disease. If it's recognized early enough, it can be treated very easily. However, if left unchecked, it can lead to blindness. And given that so many places in the world don't have access to specialists, this can bring that same expertise to many different parts of the world as well. Something that's happening right now, of course, is COVID-19, and there's lots of research happening there. There are lots of companies helping on the identification side, in this case identifying from X-rays that there's a high risk of COVID-19, and there's also lots of other work happening in terms of finding vaccines and building new medicines.

So we've talked about a lot of these different things. Let's look at how we can think about this in the enterprise itself and what would work well there. Is there something missing? If you think back to this particular slide with all these pieces, does this work in the enterprise as well?
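To make the "leveraging these pieces" point concrete, here is a hedged sketch of a very common TensorFlow pattern: reusing a model pretrained on ImageNet as the starting point for a new task. This is my own minimal illustration, not code from any of the products above, and it also foreshadows a point that comes up later about models trained by large companies being reusable by everyone else.

```python
# Transfer learning sketch: freeze a pretrained ImageNet base and train
# only a small new head for a new binary image-classification task.
import tensorflow as tf

# Pretrained feature extractor, with the ImageNet classification head removed.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # keep the pretrained weights fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # new task-specific head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# `train_ds` would be your own labeled dataset of (image, label) batches:
# model.fit(train_ds, epochs=5)
```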
Or are there things missing, meaning we really need to change things? Let's take a look at each piece one by one. The first one, of course, is TensorFlow. It's an open-source tool, available to everybody, so it's really possible for anybody, whether a researcher, a big company, or an enterprise, to pick it up and start using it, and we're seeing lots of that. The algorithms themselves are also available along with tools like TensorFlow. Most researchers now, when they publish their papers, also publish the code, or other people reimplement those papers. So the algorithms are also available in the open source for lots of people to use. Compute: of course, not everybody has access to the same kind of supercomputer, but you can rent it. With public clouds like AWS, Google Cloud, or Azure, anybody out there now has access to the same kind of compute that's available to the big companies, and they can do some interesting and exciting things with it.

Now, data, that's the next thing to think about. We do have lots of data; in fact, with big data, there are probably lots of other talks about how people are collecting it and how they can leverage it more effectively. Even though it sometimes feels like there's just too much data to manage, that area has made a lot of progress over the last decade as well. There are lots of tools for managing data lakes and data warehouses, and lots of ways to manage and organize the data we have and keep it clean.

So we have all the pieces we saw here coming together to make AI available and useful in the enterprise. Is there something else we need? Well, yes: we need people. Not everyone can put all this together, and there just aren't enough data scientists to build everything we would like to. There is, of course, lots of work going on to make this simpler. We have tools like the ones you see here, which leverage AutoML techniques, from lots of different vendors. They definitely help; however, lots more work is required to make machine learning work end to end.

To understand how this may work best, let's look again at where the tech companies have been applying AI. It's in their products, which is where they shine the most. For Google, whether it's the hardware products shown here or Search, YouTube, or the Play Store, it's all about how AI can make products more useful. And it's really no different for the rest of the businesses: it comes down to the core competency of the business itself. For MasterCard, fraud detection is a key part of the business and something customers rely on them for. Nike's marketing prowess is something it really focuses on, and it continues to leverage AI to improve its direct-to-consumer outreach. So when we think about how we're going to leverage AI in the enterprise, we need to think about the same build-versus-buy decisions that we used to think about for software. For core areas, building the competency in-house is extremely important; for example, MasterCard needs to build and own the best fraud detection systems. However, for many other areas, a company can buy pre-built solutions; for marketing, for example, it may be able to rely on tools from other vendors.
Whereas in the case of Nike, their core competency is marketing, so they should double down on their internal efforts there. It is the same when it comes to AI: it is important to build the best technology for your core competency, not to outsource it.

Over the last few years, we've seen a lot of B2C products start to integrate AI to make themselves better, and we saw some examples of that earlier in this talk. This is because of a number of reasons. Let's start with the first one: data, of course. B2C companies often have a lot more data to start with, because they have a lot more users. The good thing about the image datasets we were talking about, especially for B2C companies, is that learning transfers very easily from one task to another. For example, models trained by large companies like Google or Facebook can be leveraged by many other companies and reused over and over again. The same kind of thing is true for language as well: English or Spanish is really the same whoever speaks it, whatever you want to do with it.

Here's an example of a consumer product, photo management software, going through some of these iterations. Picasa started as a desktop application. As things moved to the cloud, Flickr did much better in that space; it built something from the ground up there. And over the last few years, we've seen Google Photos become very popular, primarily by leveraging machine learning for auto-labeling and surfacing those results to you, which makes a huge difference in how we interact with the application.

Looking at the business-to-business side of things, I believe we'll see the same kind of thing with B2B products over the coming decade, similar to what we saw in the move from on-prem offerings to software-as-a-service offerings. In fact, given that it's a lot easier to build and integrate AI in the cloud, we see it showing up much more quickly in software-as-a-service products. You'll also see a lot more technology startups rebuilding some of these applications from the ground up to leverage AI in new and interesting ways.

Let's take a couple of examples. On the HR software side, decades ago PeopleSoft was the software that you installed and ran yourself. Over time, things started to move to the cloud, and Workday came up and really leveraged that move to the cloud. More recently, we're seeing newer companies disrupt that space in very different ways, leveraging AI to rethink what can be offered there. Another example is CRM. Siebel is, of course, very well known for CRM; in fact, both PeopleSoft and Siebel are now owned by Oracle. But the CRM space was disrupted by Salesforce, which came in with its offerings directly in the cloud. So look forward to seeing what's next there, and how people disrupt these spaces by leveraging AI from the ground up. We expect to see a lot more of this in many of these areas.

I would love to leave you with one thing here. While it's tempting to expect AI to solve all our problems, I want to leave you with this last message: AI makes computers a lot more like us, perhaps more like us than we'd like. While we'd want it to solve our problems, there are lots of other interesting problems that come along with it.
That's it. Thank you.