Hi guys, my name is Suchit Ligare and I'm excited to talk to you today about myths of artificial intelligence. Just for context, I'm a senior product manager at Amazon, and this presentation is based on the myths I've experienced in my career at Amazon and even before, as a product manager. So let's jump right into it. Before we get started, I'd love to tell you about myself and my journey here. I have a background in tech and a degree in engineering. I graduated from the University of Virginia with electrical engineering and computer science, and right after graduation I started working at a small startup called Alarm.com, based in the Washington, DC area. It was an early startup in the smart home space, before smart home was as popular as it is now. I was there for a total of five years. I initially started as a hardware developer, and over the last few years there I got into product management. I ended up loving product management a lot, so I decided to quit the startup and went back to school to get my MBA from Georgetown University. I did a full-time, two-year MBA program there and pretty much focused on startups, entrepreneurship, and product management. Right after that, I got a job as a product manager at Amazon. At that time Amazon was not even a profitable company; it was a much smaller kind of company out on the West Coast. I've now been at Amazon for about five years, and I've had the opportunity to work on a lot of different products across different teams. In total I've launched four different products within Amazon. One of my first teams at Amazon was a combination of Kindle and Alexa. This was prior to Alexa being as popular as it is now, but essentially we built the first-ever knowledge graph based on Kindle's books knowledge.
Think of it as an algorithm that has read every single book ever published on Kindle and can answer book-related questions through Alexa: Who is Harry Potter? What school did he go to? What's his wife's name? What's his dog's name? So we were extracting book information from Kindle's books knowledge and surfacing it through Alexa. We launched that product, it was hugely successful, and then I moved on to another team within Amazon called Silk. Amazon Silk is Amazon's own web browser for Fire tablets and Fire TV. Within Silk I was in charge of increasing the customer base and revenue for the browser business; I worked on Silk on tablets, eventually launched Silk on Fire TV, and helped grow the customer base there. I did that for a good two years and then joined the Alexa Smart Home team, where my team built machine learning algorithms to detect certain household sounds, such as a dog barking, a baby crying, or someone coughing or snoring. We trained these algorithms to detect those sounds and just recently launched that. And very recently I started working on a new project within Amazon in the fintech field. So that's my journey. Now let's get into it. Today's agenda is very simple. I've identified five different myths based on common misconceptions; think of this as a MythBusters episode for artificial intelligence. The five myths are: first, artificial intelligence is a new trend in the tech industry. Second, artificial intelligence (AI) and machine learning (ML) are interchangeable words. Third, AI is always smart and 100% objective. Fourth, ML is way too complicated and expensive for my project. And fifth, AI and ML are only applicable in the tech industry, not really outside it. Before we get into everything, I wanted to tell you why you should care.
Typically AI and ML are used in the tech world, so developers and data scientists are much closer to machine learning and artificial intelligence. So why should you, as product folks, care about them? There are a few reasons. The first is the misuse of buzzwords. I've heard and seen this a lot, where people, especially in product, don't understand much about artificial intelligence or machine learning, and they just use hot buzzwords like ML and AI in everyday conversation, or in projects where you don't really need AI or ML. Secondly, there's a black-box mentality about what machine learning or artificial intelligence does. For people who are not in the tech industry, AI is often imagined as a box: you put something in, and the algorithm will solve all your problems. In reality it's not like that. The purpose of this webinar is for you to understand what AI can and cannot do, and that's really important, because you don't want to use machine learning or artificial intelligence where you don't need it, and vice versa: you want to use AI where it makes sense. To illustrate this point a little more, I have a fictional example here. If you're familiar with the show Silicon Valley, you'll get this reference. Let's say you wanted to schedule a meeting with your friends or with your leaders. The guy on the left is saying: hey, I can download everyone's calendar, get all the data, build a machine learning model, and figure out the best time and place for everyone to meet. And the person on the right just says: we don't need to get into all these complex details, we can just meet on Saturday. Right.
So this is an example where you don't necessarily need ML or an overly complex solution. That's why you should care. Now, jumping right into our first myth: AI is a new trend in the industry. This is definitely not true, and let me give you a really quick history. The term artificial intelligence was coined in the 1950s. Data scientists, computer scientists, anybody in the tech industry, they've been using the term for more than half a century now. Technically, AI consists of any logical software program that computer scientists can develop; essentially, anything that can simulate human intelligence is artificial intelligence. So what's new now, and why have we suddenly been hearing about AI everywhere in just the past five years or so? There are two reasons. One is improved hardware. What I mean is that computers these days perform complex computations in literally fractions of a second; compared to just a decade back, today's computer chips perform literally trillions and trillions of computations and equation-solving steps in a second. That's super important for artificial intelligence and machine learning algorithms. Secondly, if there's one thing to know about machine learning, it's that it requires a lot of data, a significant amount of data. What's changed now is that, as you know, we have a lot more data available. We have data on every topic you can imagine. Some of it is noisy data, some of it is structured data, but in general we have a lot of data available. And not only that, we have historical data going back at least a decade, because thanks to cheaper storage, cheaper hard drives, solid-state drives, and cheaper cloud storage, people don't need to delete their data.
So we have a lot of historical data, and we have more types of data today than just a decade back. All of this makes artificial intelligence and machine learning much more feasible now. It's not a new trend; it's been out there, except in the past it would just take a long time for a machine learning algorithm to actually give you an output, so it was just not practical. Now it's much easier. And to continue this: machine learning is a fancy word, but what it really means is just enhanced regression analysis, or, to put it another way, pattern recognition. You basically have a regression model, an equation that detects a pattern. Essentially all you're doing is giving it a training data set, teaching the model, getting an output on new data, validating that output, and then feeding it back into the algorithm as an input for next time. What's new now is that you have a lot of training data available, and you can rerun this cycle literally billions of times, and very quickly, because of the computation power. These things make AI and machine learning seem like a new trend, but they've been around for a while; it's just now a lot more feasible. That's essentially myth one. Myth two, and this is one of my favorites: AI and ML are interchangeable words. I've heard this a lot. It's especially true among product folks and leadership, even senior leadership. I've heard directors and CEOs interchangeably use words like AI and ML to mean the same thing when they're not; they're quite different. Let me go briefly through the differences. Artificial intelligence, as I mentioned before, is technically any sort of algorithm that a computer programmer can develop and write.
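Going back to myth one for a second: the cycle I described there, train the model, get an output, validate it, feed the error back in, can be sketched in a few lines of code. This is a minimal illustration with made-up numbers, not anything from a real project; it fits the trivial pattern y = 2x by repeating exactly that loop.

```python
# Minimal sketch of the train / validate / feed-back cycle.
# All data here is made up: the hidden "pattern" is simply y = 2x.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]

w = 0.0              # the "model": a single weight in y = w * x
learning_rate = 0.01

for step in range(1000):              # rerun the cycle many times
    for x, y in data:
        prediction = w * x            # 1. get an output from the model
        error = prediction - y        # 2. validate against the known answer
        w -= learning_rate * error * x  # 3. feed the error back in

print(round(w, 2))  # the learned weight converges toward 2.0
```

The point of the sketch is the shape of the loop, not the model: what changed recently is that we can now run this kind of cycle over enormous data sets, billions of iterations, in a practical amount of time.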
So any sort of logic, any sort of computer software is considered artificial intelligence. Examples are any kind of computer software you've seen, video games; these are all programs that people can write in this space of logic, and that's classified as artificial intelligence. Now, machine learning is a subset of artificial intelligence: it's a type of AI that learns from past historical data and detects patterns in that data. That pattern detection is super useful, because with the output of a machine learning algorithm you can essentially extrapolate and build new data sets. Based on past data, you can figure out recommendations; you can do a lot of prediction algorithms. That's what machine learning means: you give it a bunch of input data, you train a pattern recognition or detection algorithm, and you get an output that tells you how likely something is to happen: what is the propensity, or the probability, that something new belongs to this pattern. An example everyone knows is Netflix recommendations, or Amazon recommendations, any sort of recommendations. Now there's a third category of artificial intelligence called deep learning, and not a lot of people are aware of this yet. Essentially, deep learning is a subset of even machine learning, a smaller part of machine learning where the machine itself develops the algorithm. We don't have to code the pattern recognition algorithm; the machine itself figures out the right ways to detect a pattern. It's also called an artificial neural network, or ANN. You see this a lot in self-driving cars. So this is all well and good, but what does it really mean in practical life? I want to walk you through an example: facial recognition software.
So let's say you're a product manager or a product person in a company, and your team needs to develop facial recognition software. Let's go through the three different ways to do it. With artificial intelligence, the way to do this is to develop heuristics to detect facial features. You can tell a programmer: hey, let's build software that can detect a roughly circular face. It should have two dark pupils in the top one-third; it should have an opening for a nose in the middle of the circle; and toward the bottom one-third, an oval mouth. Based on these heuristics, you can get a pretty decent facial recognition program, and this is how it worked up until the 2010s. These heuristics can get super complex, and the more complex your heuristics, the more accurate your algorithm. That's essentially the artificial intelligence approach. Now let's say you want to do this the machine learning way. First of all, this requires a lot more structured data. By structured data, what I mean is you need a labeled data set to train the model. You have your machine learning algorithm, and you need to feed in, say, a million images of faces, of what faces look like, and also a million images of non-faces. Both sets are important, because the faces tell the algorithm what it's looking for, and the non-faces correct it when it's wrong. Based on this, the algorithm feeds the output back into itself and learns from it. That's how you would develop facial recognition software with machine learning: you take an ML algorithm, you feed it data, and the output is that it can detect faces or non-faces to some level of accuracy. Now, let's say we have to do this through deep learning.
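Before we get to the deep learning version, here is a rough sketch of how the first two approaches differ in code. Everything in it, the feature names, the scores, the tiny "training set", is hypothetical and purely for illustration; real face detection works on pixels, not on tidy dictionaries.

```python
# Approach 1: classic AI -- hand-coded heuristics. A programmer writes
# the rules. The "image" here is a made-up dict of features.
def is_face_heuristic(img):
    return (img["roughly_circular"]
            and img["dark_regions_top_third"] == 2   # two pupils
            and img["opening_middle"]                # a nose
            and img["oval_bottom_third"])            # an oval mouth

# Approach 2: machine learning -- instead of writing rules, learn a
# decision boundary from labeled examples. Each example is a made-up
# (face_likeness_score, label) pair; label is True for a face.
def train_threshold(examples):
    best_cut, best_correct = 0.0, -1
    for cut in [score for score, _ in examples]:
        correct = sum((score >= cut) == label for score, label in examples)
        if correct > best_correct:          # keep the best separator
            best_cut, best_correct = cut, correct
    return best_cut

faces     = [(0.8, True), (0.9, True), (0.7, True)]
non_faces = [(0.2, False), (0.3, False), (0.1, False)]
cutoff = train_threshold(faces + non_faces)
print(0.85 >= cutoff)   # a new face-like score lands on the "face" side
```

The contrast is the whole point: in the first function a human encoded the knowledge; in the second, the knowledge (the cutoff) came out of the labeled data, which is why the ML route needs those millions of face and non-face examples.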
So this requires a lot more data, but it doesn't have to be structured. Typically it requires around 100 times more data, so we're talking about 50 million or 100 million images, and they don't have to be labeled as faces or non-faces. You feed all of these images to a deep learning model, and the algorithm comes up with its own layers, such as: does it have eyes, does it have ears, does it have one nose, one mouth, things like that. All you're doing is looking at the output and figuring out whether it's accurate or not. That's how a deep learning algorithm works. This is a very, very high-level overview of what artificial intelligence means, what machine learning means, and what deep learning is. I'd highly recommend you go look into how these work and get more information. But at the end of the day, AI is not the same as ML, and definitely not the same as deep learning, and as product folks you should know when to use those terms and when not to. Myth three: AI is always smart and 100% objective. I've heard this a lot, again among people who are non-developers, on the non-tech side, or product folks, who will often just take the output of an AI or machine learning algorithm as the source of truth. While that may be true in many cases, it's not always accurate. Essentially, an algorithm is only as good as the data we feed it. If you feed it a bad data set, or noisy data, the output you get won't be that accurate. Worse, a lot of the time ML algorithms have subconscious bias. Contrary to what people may say, data can lie, and machine learning algorithms will amplify the bias. I've got two examples here to illustrate these points.
The first is a very well-known example of bias against diversity. A big company had this genius idea of using a machine learning algorithm to go through all the resumes sent to the company and select the best applicants. It's a great idea, and they developed a machine learning algorithm. The problem was the data they used to train it: it came from their own employees. They took the resumes of the employees they currently had and trained the algorithm on them. As it turns out, the company was not very diverse. They had a typical kind of employee: mostly one gender, with very similar backgrounds; they had mostly gone to big universities, Ivy League schools, and so on. So the machine learning algorithm did its job and learned from these resumes. It learned that candidates with exactly these characteristics are the ones we want, and the others are the ones we don't need, and it started discriminating against diverse applicants outside of those typical characteristics. That is essentially subconscious bias, and it was definitely amplified in this use case. A similar example is bias against countries. Let's say you're a product manager for a global company that does business all over the world, but it just happens that your biggest customer base is in the US. Now let's say you want to build a machine learning algorithm to understand what kinds of products are going to be trending next year, based on your data.
Because most of your users are based in the US, the machine learning algorithm will learn that customers or users based in the US matter more than non-US users. It's going to predict trends based on US data, and it will effectively discriminate against other countries, so there's a bias there. You as product people have to understand when that bias will come into play and when it won't, and how to tweak things to make sure the bias doesn't show up. Myth four: machine learning is way too complicated and expensive for my team and project. This is an interesting myth. It's kind of true, but not because the machine learning algorithm itself is difficult. I think people believe that to build a machine learning algorithm you have to hire data scientists or specialists who will develop it from scratch. In the majority of cases, I want to say ninety-five-plus percent of cases, nobody actually develops the machine learning algorithm themselves. Most of the time people just use open-source libraries, such as TensorFlow or Python libraries, or any of the tools available through AWS, Google Cloud Platform, or Azure. The actual machine learning algorithm is the easy part: you just use one of the existing ones. The hard part, and this is where you come in as a product person, is figuring out: where do you get the data from? How much data do you need, and how do you get it? How do you make sure the data is not biased? How do you make sure it's not noisy, that it's a clean data set? And on the output side, you have to make sure the output it gives you is accurate, too. That's a hard question: what kind of accuracy are you looking for?
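To make that accuracy question concrete, here's the basic mechanic of how accuracy is usually measured: hold out data the model never saw during training, and count how often its answers match the true labels. The "model" and numbers below are toy stand-ins for illustration, not a real fraud detector.

```python
# Sketch: measuring a model's accuracy on held-out data.

def model(x):
    # Pretend this came out of a training step: it flags
    # anything with a risk score above 0.5 as fraud.
    return x > 0.5

# Held-out test set the model never trained on: (risk_score, true_label)
test_set = [(0.9, True), (0.8, True), (0.6, True),
            (0.2, False), (0.4, False), (0.55, False),
            (0.1, False), (0.7, True), (0.3, False), (0.95, True)]

correct = sum(model(x) == label for x, label in test_set)
accuracy = correct / len(test_set)
print(accuracy)  # fraction of held-out examples classified correctly
```

Here the model gets 9 of the 10 held-out examples right, 90% accuracy. Whether 90% is good enough is exactly the product question: for a genre recommender it may be plenty, for fraud detection it may be nowhere near acceptable.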
It's impossible to get a machine learning algorithm that predicts correctly 100% of the time, right? So what is the right level of accuracy? Is 90% good? Is 99% good? What about 60% or 70%? The answer obviously depends on the application. Some applications require a high level of accuracy and others don't, and the more accuracy you want, the more data you need to feed in. From my personal experience: when working on something like a machine learning algorithm that detects security fraud, we want it to be highly accurate, so we need a ton of data to make sure the algorithm is trained properly. Whereas there are a lot of other applications, say you're just trying to recommend a broad genre, where you don't need that much accuracy; you may be okay with 60%, 70%, or 80% accuracy, and you don't need as much data. So depending on the application, you as a product person have to figure out what level of accuracy your application requires, and that's the hard part: dealing with data, getting the data set, cleaning it, making sure it's not biased. You also have to think about whether the machine learning output is ethical; there's ethics in AI as well. Those are the hard parts. So ML is not complicated for your product because of the complexity of the algorithm itself; it's difficult because you as a product manager have to figure out whether there's a data set you can get or purchase to make it feasible, and that can become really difficult. The last myth I have, and I've heard this a lot as well, is that AI and ML are only applicable in the tech industry. This is 100% not true. AI has traditionally been affiliated with the tech industry, but especially in the past three or four years that has changed, and I'm here to tell you that every single industry uses AI in some form or another.
I work at Amazon, and Amazon is in so many different industries, and there's rarely a team at Amazon that does not use machine learning algorithms. It's very common. Here I've put together a few examples of different industries and what they use machine learning for. Take finance: finance was one of the first industries to adopt machine learning algorithms, because of the amount of data and the level of accuracy it requires. In finance, some of the examples are stock market predictions, robo-investing, which is a fairly recent product and service, fraud detection, which is a big use case, and risk assessment and management, another big use case. In the marketing or ads industry, machine learning algorithms are used for sentiment analysis on customer feedback; to detect knockoff products, products that look like they're from Nike or Louis Vuitton but aren't genuine; for CRM intelligence; and even to detect and remove fake or inappropriate reviews, things like that. In the transportation industry, everybody knows about self-driving cars; they're one of the big use cases of ML, specifically deep learning algorithms. In healthcare, this has become increasingly useful: ML and deep learning algorithms are used to identify diseases, for disease prevention, disease prediction, drug discovery, things like that. We're in a COVID world, and more and more use cases have come up for healthcare and machine learning. Real estate.
This is one of the last industries people would think ML and AI are used in, but they're used very commonly there, to predict real estate market values and prices, and to make property recommendations based on your likes and dislikes. Similarly, because of the amount of data and paperwork involved, within the law industry AI is used to identify legal issues in contracts instead of going through them manually; that's a very big use case. Essentially, the idea here is: whichever industry you're in, chances are AI is already there, machine learning is already there, and you have to know about it at a baseline level. You can't go into an industry and be a successful product manager without understanding the basics of artificial intelligence, machine learning, and deep learning. And here are a few more examples of how, in certain cases, ML is as smart as or even more accurate than humans. In the medical diagnosis field, AI systems can correctly detect cancer 87% of the time, compared to 86% by healthcare professionals, and AI is used more and more in the diagnosis field. Similarly, in the law field, AI is 10 times more accurate and faster than top human lawyers at certain tasks, which is really impressive. You must have seen Google's Duplex AI demo, where the service makes phone calls to set up appointments with humans, without the humans knowing it was an AI; it's becoming that accurate. And lastly, I want to leave you with AI in the field of art, not just science and math. The painting on the right, called Optimistic Sky, is one of the first paintings fully developed by an algorithm. This brings up very interesting questions, such as: what is the future of painting and art, if AI can develop these techniques?
So essentially, anywhere you look, in every industry, AI is there and growing much faster, which just doubles down on the point that you have to learn about AI and ML and use them the right way. That covers my five takeaways, the myths I wanted to bust or confirm. First: artificial intelligence is a new trend in the industry. We saw that it's not really; it has just become popular recently because of technological advances in data and hardware. Second: AI and machine learning are interchangeable words. We looked at how they're quite different, and ML is a subset of AI. Third: AI is always smart and 100% objective. Again, that's not true, because it can be biased, inaccurate, or not objective, depending on the training it goes through and what data you feed it. Fourth: ML is too complicated and expensive for your project. Again, that's not always true. It is complicated because of the data you need and the amount of it, but not necessarily from a purely algorithmic perspective; you can get the algorithm for free from any of the open-source libraries. And lastly: AI and ML are applicable only in the tech industry. Well, they're not, and in almost every company, regardless of industry, I guarantee you AI and ML are used in almost every team. The more you learn about AI, the more of an advantage you'll have once you get into the field and become a successful product manager. So that's essentially my webinar, and I want to thank you for listening. Again, this was a very high-level overview of what AI, ML, and deep learning are. I highly recommend you learn more about them on different platforms, and become familiar enough that you can talk to people about them. Feel free to reach out to me on LinkedIn.
Thank you all.