Hi everyone. Welcome to this webinar, which is part two of the two-part webinar series on AI in product. My first webinar was all about how to use AI to build a product roadmap, and this webinar is going to be all about how to think of adding AI features to your product, or how to actually build an AI product. Now, before we dive in, a little bit about me. I recently graduated with an MBA from Stanford, and I'm currently working as a senior product manager at PayPal. Before this, I was a product manager at Oracle, and before that, in fact even before starting my MBA, I was an Android app developer. Today we are going to be covering four things. The first topic is going to be to define what a product is. Now, I know that a lot of you are product managers and maybe this is one point that you know inside out, but I just want to make sure that we're all on the same page. So the first topic I'm going to cover is basically the definition of a product. Second, I'm also going to talk about value-driven AI integration. This is going to be more about why AI should not just be added for the sake of having AI in your product, and why it's super important to actually make it user-driven and user-centric when you're adding AI to your product. Topic number three is going to be ascertaining technical feasibility: if you have decided to add AI to your product, what exactly you should be thinking of, what checklist you should be following, and what you should verify to be confident that you can actually add that AI capability. And the last topic we're going to cover is going to be ethics in AI and mitigating any potential biases. A lot of times your algorithm itself might become biased because of the data itself. So if that happens, how do you mitigate it? So the first question is: what exactly is a product? Now, a lot of times we define and also use different products.
In fact, right now I'm giving this presentation on a laptop, which is no doubt a product. We all use our mobile phones; those are products as well. Now, how do you define a product? In my own opinion, a product is anything that solves a problem. In fact, think of a ladder, a contraption made of wood and rope. When you pay for a ladder, what exactly are you paying for? Are you paying for just a contraption made of wood and rope, or are you paying for access to your roof? Think about it. This example is from the jobs-to-be-done framework, which actually defines any product as something that does some job for you. Similarly, think of a drill. When you pay for a drill, what exactly are you paying for? You're probably not paying for something made of steel and electronics. You're probably paying for a hole in your wall so that you can hang a TV or a painting. That's what you're paying for. A product is something that serves a job in our lives. This is basically what the jobs-to-be-done framework is. The reason why I define what a product is, is because during the rest of this presentation, this context of an AI product actually needing to solve a problem needs to be kept in mind. That's why I've gone ahead and defined what a product is. Now, diving deeper into our presentation. The first topic after we have defined what a product is, is basically to look at what value-driven AI integration means. When I say this, all I mean is that an AI feature should be added based on its potential to solve a problem, and not just for its innovative nature. For example, don't just add an AI feature just because it's AI. Think of the use case that it needs to solve and, based on that, decide the right feature. An AI feature that does not resonate with the user's need, an AI feature that does not solve what the user needs, is going to be a flop. It's not going to fly, and it's also probably going to confuse them.
A lot of times there's a lure of adding an AI feature to your product just because AI is a buzzword, and since November 2022 we've even had ChatGPT and generative AI become buzzwords as well. Just because AI is the buzzword, it can be extremely tempting to add an AI feature to your product. But in fact, a valuable product is a product that solves an actual user problem, with or without AI. When you are assessing whether a product is a valuable product or not, one of the main things you should be asking is: is this problem big enough that users would actually want to seek out your product to solve it? Is it actually solving a big enough problem, and is it solving it properly? You need to take a look at that regardless of whether your product is an AI product or not. Now, a few great examples of value-driven AI features can actually be seen on different websites as chatbots. Many websites implement such bots, which are designed to help customers get their questions answered or sometimes even complete a few tasks. Chatbots, when developed really nicely, can actually be a huge value add, but if the chatbot has not been designed and programmed properly, it can also be a value-negative feature. Similarly, Spotify's Discover Weekly: using AI, it makes song recommendations to you, so a lot of times you don't even need to work really hard to discover new songs. Discover Weekly is essentially ingesting a lot of your usage data on Spotify, and based on how you use Spotify already, it makes more recommendations. So that's a great example of a useful AI feature. Similarly, Apple's Face ID. The reason why I really love this is because when I use Face ID on my own phone, when I'm holding it, my experience is completely seamless and I don't need to type a password to log in.
It's almost like I don't even need a password when I'm the one holding the phone, but if somebody else is holding the phone, then they do need the password to open up my phone. That's an awesome experience. So in this case, AI can actually deliver a really good user experience. Now, having looked at these three value-driven AI features, let's think of some examples of AI features that may not be completely value-driven. These examples are going to be hypothetical, obviously. So imagine a hypothetical home security system. They introduced an AI-powered system to detect suspicious movement around the house. On the face of it, this feature sounds really nice: it's using AI to not only detect any kind of change in the room, but also classify any movement as either suspicious or non-suspicious. Imagine how useful it can be if it works properly. It's really a typical classification problem; it can be solved using a neural network. But not everything goes well, because this AI company has not done proper user testing, they have not tested for product-market fit, and they've also not completely ensured that their AI system is robust. As a result, this system frequently raises false alarms. Even though it's installed inside the house, the moment it sees a car passing by, it starts ringing. Too many false alarms actually lead to desensitization. Just like with the boy who cried wolf, even when the system is ringing for a genuine reason, you would actually feel as if it isn't, so you would get desensitized. And that's basically a negative value add in this case because of the AI feature. That is what happens if the AI feature is not value-driven. As we can see, introducing AI just for the sake of innovation can heavily backfire. Now, let's look at another example.
This time we are looking at a fitness app. This application is all about counting the number of steps you're taking each day, and it also keeps track of your heart rate. It's completely hypothetical, but it's really similar to a lot of apps out there. One of the product managers working on this application sees a trend of AI-driven health recommendations, and the team immediately starts working on an AI-driven daily exercise routine. It would use AI to recommend exercises for you every morning, every day. But once again, not everything goes correctly. In the case of this application, the users were more concerned about inaccurate step counting. A lot of times this application did not even get its basics right: counting the number of steps, the basic thing it was supposed to be doing, it did not even do properly. Users were also concerned about syncing issues. This app would not sync up with their Apple Watch or with their smartwatches, and that was a huge issue for a lot of users. But the team overlooked all of this and went straight to adding an AI feature. And the result was that because this app was making mistakes in counting steps, users mistrusted whatever exercise recommendations it was making using AI as well. Because if the fundamental feature of collecting data on the number of steps you're taking is wrong, then as a user it's really hard to trust the app. So in this case, this feature did not really serve as much of a value add. Fundamental necessities were overlooked in pursuit of AI. This company should have focused on the basics; they should have focused on getting their existing systems right before even thinking of adding an AI feature. And the moral of the story is that blindly following a trend, blindly following an AI trend, can actually take away resources from truly important features.
And that should be kept in mind when you're thinking of adding any kind of an AI feature. Now, when thinking of adding an AI feature, I would actually say that there are three beginning steps. You have to take these three steps even before you actually start building an AI feature. The first one, as a product manager, whenever thinking of any new product, regardless of whether it's AI or not, is to define the pain points. Of course you would be conducting user interviews, doing root cause analysis, doing stakeholder mapping, et cetera, to really identify the pain points. That is a fundamental part of any product design and any design thinking process as well, so this has to be done. The next step is to think of how a user, how a human being, would solve this problem. We'll actually see a few examples of this later on in the presentation. So the first two questions you should be asking yourself are: number one, what is the problem, and number two, how would a human being solve it? And the last question you should be asking is: does AI help scale the solution up? Now, when I say scale up the solution, I'm not just talking about increasing the size of the solution or increasing the number of people the solution is solving that problem for. That's there; I'm definitely including that in my umbrella. But I'm also talking about quality: can AI actually solve that problem better than a human being? You need to ask this question of yourself, and only if you have essentially a yes to all of these questions should you be thinking of adding an AI feature to your product. For example, let's say that we have a few restaurant owners. Again, all of these people are hypothetical; none of them are real, but they are based on very real experiences. The first persona is Jessica. She is a fusion food innovator.
What she's really unsure of is whether her cuisine would actually be well received or not. She wants to understand what kind of marketing or positioning she should be doing to actually take her cuisine to market. She is a new restaurant owner, and she's also unsure of the competitive landscape around her. Imagine that she lives in San Francisco, and to be honest, the whole Bay Area has a really good food scene, which is another reason she's unsure of the competitive landscape. Similarly, we have Raj, who is the owner of a family-owned Indian eatery. He's new; he has recently started his restaurant. He wants to preserve the authenticity of the Indian recipes that he has been learning from his mom, and he actually struggles between tradition and modernity. As in, should he go for a traditional dish that might be a bit spicy, or should he modernize it by adding avocado to butter chicken? Now, that may not sound the best, but Raj is confused about problems like these: what should he do in terms of going the traditional route versus a modern recipe? And he wants long-term customers; he wants people who would actually become his regular patrons. Restaurant owners number three are actually two people, Emily and Jake. They're partners, they're health-focused, and they own a cafe. In their cafe, they want to offer different kinds of healthy choices, but they are unsure if the customers would be willing to pay a premium for health or not. A lot of times they also find themselves wondering about their strategies for engaging more health-conscious customers. So they're really in a dilemma about what to do. Keeping these three personas in mind, let's look at how a human being would solve the problem.
One of the ways a human being could try to solve this is to look at what has worked in the past. For Jessica, maybe a human being could look at whether there are similar restaurants in the Bay Area, which of them have really worked and become popular, and which of them have not. Based on that, a human being can make a recommendation to Jessica, and similarly for Raj, and for Emily and Jake as well. Now, can AI solve this in a better manner? That's what we have to ask ourselves here. Using AI, you can, number one, define an objective function. In this case, the number of Yelp stars: if a restaurant has five stars on Yelp, then that's a great proxy for how the restaurant is performing. So let's define our objective function as Yelp stars. Let's also say one year from the opening, because when you open a restaurant, you're not going to get five Yelp stars on day one. You might get a few customers who go back and review, and then you'll probably start from three or four stars and increase to five stars or 4.5 stars. So our objective function is the Yelp stars one year from the opening. We can also take a look at the different attributes that are available from Yelp itself. In fact, if you go to Yelp's website, and I did that personally a few years ago, Yelp has around 2,000 attributes available for every single restaurant, ranging from geography to whether that restaurant has live music or not, whether that restaurant has a one-page menu or multiple pages, et cetera. So they do have a lot of attributes that you can take a look at. Now, you would notice that we are already getting more comprehensive compared to a human being. Next, we process all of this data. Now, what do I mean by processing this data?
When you're thinking of an AI feature, there is something called cleanup of the data, which might involve adjusting the data so that it can be consumed by your AI algorithm. For example, any AI algorithm actually prefers numbers, as in, all attributes should be some kind of a number. Take, for example, the attribute of state: one restaurant is in California and another restaurant might be in Massachusetts. Now, California and Massachusetts are not numbers; they are words. So how do you enable an AI system to consume them? What you can do for these restaurants is define 50 more attributes: is it in California, yes or no, zero or one? Is it in Massachusetts, yes or no, zero or one? In this manner you can define 50 binary attributes for the states and then set all of them to zero except for one. This is what you call binarizing the data, and it's going to be part of the processing of the data. So you process the data, and then you train your model. It can be a neural network; it can be something else. Based on it, what you can actually do is two things. For whatever idea Jessica has, or the restaurant that Raj already has, you can help them by predicting their Yelp stars one year from now. That's number one. And number two, you can also tell them which features actually correlate really well with Yelp stars, and that's going to be even more helpful. The reason why I know all of this is because a few years ago, when I was a student at Stanford University, I actually did this exact project. In fact, a poster on this is up on my LinkedIn profile as well. So whenever you get time, maybe go to LinkedIn, look me up, and if you look at my profile you'll actually see the poster about this project. So that is how a value-driven AI feature or product can help a person or a customer.
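To make the binarizing step concrete, here is a minimal sketch in Python. The state list and example values are made-up illustrations, not Yelp's actual attribute schema; a real pipeline would typically use a library encoder such as scikit-learn's OneHotEncoder.

```python
# "Binarizing" (one-hot encoding): turn a word-valued attribute like the
# state into a set of 0/1 attributes the model can consume.
STATES = ["CA", "MA", "TX"]  # illustration only; in practice, all 50 states

def one_hot_state(state):
    """Return one binary attribute per state: 1 for the matching state, else 0."""
    return [1 if state == s else 0 for s in STATES]

# Each restaurant's 'state' attribute becomes a binary vector with exactly
# one entry set to 1 and the rest set to 0:
print(one_hot_state("CA"))  # -> [1, 0, 0]
print(one_hot_state("MA"))  # -> [0, 1, 0]
```

These binary columns then sit alongside the other numeric attributes (live music yes/no, menu pages, and so on) as the input to the model.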
Now, having looked at the importance of value-driven AI and the importance of not adding AI just for the sake of it, the next topic in this presentation is going to be all about determining technical feasibility, and looking at whether the AI feature is even feasible or not. For this section, I'm going to assume that you have already asked yourself the questions about whether AI is actually the right feature or not. I'm just going to assume that the AI features are value-driven features; that's an assumption I'm going to make for the next section. So going forward, having decided that you're going to add an AI feature to your product, you now want to know whether it's possible for you to actually build that feature or product. The first question you should be asking yourself is whether you have a clear problem definition or not. This is something you have to ensure as a product manager; in fact, we talked about this in the previous section as well. This first question is all about ensuring that the problem you're defining, the problem you're actually going to be solving, is a clear pain point, and that your solution is worthy enough that a user would actually intend to seek out your product. During my own journey as a product manager, during my previous jobs and even during my MBA, I've learned a really important distinction between willingness to use a product versus intent to use a product. Willingness to use a product is nothing but a user being okay with using a product, but intent is different. Intent is basically what drives a user to look for your product. So you need to have that problem definition and solution definition down pat to ensure that the user would actually intend to get your product and use it. The second thing you should be ensuring is a clear objective function and constraints.
Any AI algorithm, to be honest, if you really take off the cover, if you really dive deeper into any AI system, is nothing but an algorithm running on tons and tons of statistical data. So AI and statistics are closely related. What that means is that any algorithm is a mathematical function, and when that algorithm is running, it needs to optimize for some kind of mathematical quantity. You cannot just tell an AI algorithm, hey, give me the right recommendation. Whatever recommendation you're looking for, you need to define mathematically what "right" means. In that sense, you have to define the objective function, and you have to define the guardrails as well. Then, number three, and this is probably the most important: you have to ask yourself, for the problem that I'm trying to solve and for the solution that I'm thinking of, do I have the data, and am I allowed to use it? A lot of times the data itself might be governed by regulations, or it might be personal data that you cannot use, that would be unethical to use. So you have to ask yourself: do I have the data, and am I allowed to use it? That's question number three. And then finally, you have to identify an AI algorithm. When I say AI, it's not just some magical black box that spits out an answer, although ChatGPT kind of feels like that; under the hood, any AI algorithm actually works with a really complicated mathematical function. So, for example, an AI algorithm can be a simple linear regression model, or it can be a neural net, or it can be a decision tree, as we saw in my previous webinar, or in the case of generative AI it can be an LLM. You have to choose which algorithm you should be going for, and you need to have this decision made early in the process, because that determines how you're going to be processing your data.
Once you have completed this, then you have a secondary checklist, a secondary list of things you have to ensure. I'm representing this as a pyramid because it's more of a descending order of importance, but every single item is important. The first question you should be asking yourself is: do I have a team that can actually implement that AI algorithm? Do I have the people, or do I myself have the expertise, to write code that would run the AI algorithm? The answer has to be yes. If the answer is not yes to this question, then it's not feasible to actually add an AI feature. Now, sure, these days it's actually easy to learn how to implement any kind of an AI algorithm with tutorials on TensorFlow and PyTorch. So it's easy to get there; it's easy to ensure that the answer to this question is a yes, but you have to ensure it. Second, you have to have a data strategy. For example, whenever you're talking about training your algorithm on existing data, where would that existing data come from? It has to be ingested from somewhere. And when you say ingesting that data, now you're looking at infrastructural issues, and probably at legal and ethical issues as well in terms of which data you're ingesting. So you need to have all of those questions answered, and you need to have a system in place that can not only ingest data once, but can keep ingesting new data, so that you can keep training your algorithm again and again. This system needs to be in place. Similarly, you have to have a feedback mechanism. This is one of the things we are going to touch upon later in the presentation, but you have to have a system to implement continuous feedback. A lot of times your algorithm is going to make classifications and is probably going to misclassify a fruit as an animal.
It's going to make mistakes in the beginning. And a lot of times, even when it's actually right, you're still going to get feedback. So for that situation, you need to have a clear way to incorporate whatever feedback you're getting. Number four, and this is also super important (I called this out in the second step as well): you need to have an ethical, legal, and security framework. Ethical and legal can be clubbed together: you need to make sure that the data you're ingesting does not violate anyone's privacy, and that it's compliant with the laws of the land as well, GDPR in Europe and similar regulations in California. So you need to make sure that your data ingestion and your data handling comply with those laws too. And then there's security, as in, you need to make sure that whatever system you're storing the data in, that system is secure. Because these days, data is gold; people want data. That means that if you have data about your customers, about your users, then it's valuable and you need to secure it. Finally, number five: you need to make sure that whatever AI you're creating does not become technical debt. It needs to be something that can actually integrate with the rest of your app, because technical debt, regardless of whether it's AI or not, is always hard to handle and maintain. So you need to make sure that whatever feature you're thinking of is not going to become technical debt. If you want to know more about what technical debt is, I highly recommend googling it, or asking ChatGPT, obviously. But yes, you need to make sure that your AI algorithm does not become technical debt. Now, let's walk through all of these questions with the next example. Imagine that you work for a fruit juice bar in a really hot region. This could be Texas; this could be India too. You're selling juices, and your challenge is to predict which juices will be the hit of the season.
Now, a lot of you might think: hey, if I'm an experienced juice seller, then I would probably already know, just by instinct, which juices would do well. And that's true. But imagine that you are just like Jessica and Raj and Emily and Jake: imagine that you're also new to this business, and you want a way to predict which juices would actually be the hit of the season, depending upon which season it is. And imagine that you've already checked the first part: you've already ensured that in this case, AI is going to be a value-driven, useful tool, so you've already decided to add some kind of an AI feature. This is where Fruitful Insights as a product comes into the picture. Imagine that it's going to be an AI-powered solution that predicts top-selling juices based on different attributes like weather, local events, customer reviews, and market trends. So this is going to be an AI feature that does all of these things. Again, let's go through the same checklist. Number one, a clear problem definition. With the abundance of fruits, and with the abundance of options in terms of which juices I could be bringing in, the clear problem definition is to make an informed decision on which juices I should prioritize. That's basically the problem. In this case, the objective function, what I really want to maximize, is the sales of the juices, so that becomes my objective function. And the constraints would be the available fruit stock and prep time. Similarly, the next question is the data and the permission. This application can maybe ingest data from local fruit harvesters, it can probably get sales records from different juice bar chains with their permission, and it can probably also take in user reviews for these different juice chains. And this data ingestion needs to always be within legal and ethical boundaries.
Once you've ensured that, then, of course, you have to identify the model or the algorithm that Fruitful Insights would use. Mind you, in this hypothetical case, Fruitful Insights as a product does not exist yet; you are the product manager who's actually thinking about building it as an AI feature. So the fourth question you should be asking yourself is: what algorithm would be the best in this case? Here, we would probably use a recurrent neural network. So yes, that's what you could use. Now, the secondary checklist to ascertain whether an AI feature like Fruitful Insights is possible for you or not is to, number one, ensure that you have a team. These people are hypothetical; I just added them to show that you have to have a team in place. Similarly, for implementing Fruitful Insights and its AI algorithm, you need to have some kind of real-time data feed from sales: you need to keep ingesting more and more sales data, and as you do, you have to ingest more and more data on the attributes, on which juices and which fruits are actually selling more than the others, and also in which geography, so that you can keep training your model. The third thing you should be ensuring is that you have a feedback mechanism as well. A lot of times your algorithm is probably going to make mistakes; you're probably going to get a lot of feedback from juice bar owners, and you have to incorporate that feedback into your system. So you need to have that system in place before even starting to work on it. Number four, and this is actually one of the most important things: you need to ensure that whatever data you're ingesting is within the ethical framework. For example, you're ingesting sales data from the different juice vendors for whom you're building the Fruitful Insights product.
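To sketch how the objective and constraints fit together, here is a toy version of the ranking step in Python. The juice names, sales numbers, stock flags, and the five-minute prep guardrail are all invented for illustration; in the real product, the predicted sales would come from the trained model rather than being hard-coded.

```python
# Placeholder predictions standing in for the output of a trained model.
predicted_sales = {"mango": 120, "watermelon": 200, "beet": 40, "orange": 150}

# The two constraints from the checklist: available fruit stock and prep time.
fruit_in_stock = {"mango": True, "watermelon": True, "beet": False, "orange": True}
prep_minutes   = {"mango": 4, "watermelon": 3, "beet": 6, "orange": 2}
MAX_PREP = 5   # guardrail: skip juices that take too long to prepare

def recommend(top_n=2):
    """Rank juices by predicted sales (the objective), subject to constraints."""
    feasible = [j for j in predicted_sales
                if fruit_in_stock[j] and prep_minutes[j] <= MAX_PREP]
    return sorted(feasible, key=lambda j: predicted_sales[j], reverse=True)[:top_n]

print(recommend())  # -> ['watermelon', 'orange']
```

Notice that beet juice never surfaces, however it is predicted to sell, because the stock constraint rules it out; that is exactly the objective-plus-guardrails structure from the earlier checklist.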
So when you're ingesting their sales data, ensure that all the data is anonymized, because you're essentially getting customer data, as in, data about the people who are buying those juices. So you need to ensure that the data is properly anonymized, and a lot of times you probably should not even be ingesting it; maybe metadata is good enough. You need to ensure that whatever you're ingesting is within the ethical and legal framework, and at the same time, you need to store that data in a secure location as well. And then finally, in the case of Fruitful Insights, we said that it should not become technical debt. So for Fruitful Insights, you would want to build a system that integrates with point-of-sale systems, meaning that it can more easily ingest all of the sales data, of course with permission from the juice bar owners. Once you have answered all of these questions and ensured that all of these points are in place, then you can move forward to actually build that feature. And hopefully, for a case like this, a product like Fruitful Insights would be useful. So, moving forward: a lot of times AI itself can introduce biases. What might happen is that even though your algorithm in itself was not biased at all and did not have any inaccuracies, just because there were inaccuracies in the data, the algorithm trains on it and actually develops inaccuracies because of that data. We'll explore this with an example in more detail. AI capabilities are powerful, and they do come with ethical dilemmas. A lot of times, misused AI can actually harm user trust and brand reputation as well. Responsible AI, on the other hand, actually has to consider user welfare, data privacy, and societal impact. That's basically the opposite of misused AI.
And these days, as more and more users become aware of how their privacy might sometimes be in danger whenever their data is being ingested, the users themselves are demanding accountability in terms of privacy. So it makes a lot of business sense as well to use AI responsibly. And finally, and this is something we're going to explore in the next few slides, sometimes the data itself can introduce inaccuracies or biases. In fact, this is one of the most common ways an AI algorithm can end up making mistakes. So let's take the example of an AI-based medical imaging app. The purpose of this app is to ingest photographs of your skin and diagnose some kind of skin condition. It's a hypothetical product once again, but let's imagine that its job is to look at a photograph of your skin and then correctly diagnose you. That's the goal. Such an AI tool would often rely on existing data sets, as in, it would go through maybe thousands or hundreds of thousands of photographs, which are labeled as either healthy or with the specific skin condition that the person had. In that way, the algorithm trains on the different photographs of different skin and learns to classify whenever a new photograph comes along. That's basically all the AI algorithm would be doing. But in this world, we have a lot of diversity. So when the AI algorithm is being used in real life, it will encounter a diversity of skin colors, and ideally it needs to be able to make correct predictions every single time. But that does not happen. In fact, a lot of times in the training data you might have one demographic that's overrepresented and another demographic that's completely underrepresented. For that underrepresented demographic group, the AI algorithm is going to make mistakes.
And in the case of any medical issue, a misdiagnosis can actually lead to disaster. So it's extremely important that something like this is mitigated from the beginning. Such biases can actually lead to health disparities, with certain demographics not even receiving proper care just because the AI system did not diagnose them properly. So what is the reason for the gap in this data? What is the reason for the overrepresentation of one demographic group over another? A lot of times, traditional medical resources, from textbooks to image banks, have only a limited set of photographs, and a lot of times that limited set overrepresents only one or two demographics. Similarly, there has been a historical focus on fair-skin samples as well; that might also skew the training of this tool. And then, finally, when you have a lack of representation of diverse skin colors, of course when the AI algorithm finally comes across an underrepresented skin tone, it gets completely thrown off. To mitigate this, you need to have strict oversight and also validation protocols so that these biases don't persist. So what's the solution? How do we ensure that we have diagnosis using AI that's completely inclusive? Of course, we diversify our training data; that's the first and most obvious thing we need to be doing. Then we need to engage with different communities globally to ensure that we do get samples from a diverse range of people. And then finally, our validation process needs to be strict and robust. And of course, we need to ensure that our healthcare professionals are also aware of such potential biases and misdiagnoses by AI systems across the whole range of demographics. Even the healthcare professionals, doctors and nurses, need to be aware so that they can actually compensate for this.
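One simple, concrete first step toward that mitigation is auditing the training data's demographic balance before training at all. Here is a minimal sketch; the records, group labels, and the 30% cutoff are all made up for illustration, and a real audit would also check balance within each diagnosis label, not just overall.

```python
from collections import Counter

# Tiny made-up stand-in for a labeled medical image dataset.
dataset = [
    {"skin_tone": "light", "label": "healthy"},
    {"skin_tone": "light", "label": "condition"},
    {"skin_tone": "light", "label": "healthy"},
    {"skin_tone": "dark",  "label": "condition"},
]

def representation(records, attribute):
    """Share of the dataset that each demographic group makes up."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

shares = representation(dataset, "skin_tone")
print(shares)  # light-skin samples are 75% of this (tiny) training set

# Flag groups below an illustrative 30% cutoff as needing more samples.
underrepresented = [g for g, s in shares.items() if s < 0.3]
print(underrepresented)
```

An audit like this does not fix the bias by itself, but it makes the gap visible early, which is exactly when the talk says collecting more diverse samples and tightening validation should happen.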
Thank you for hearing me out. So in this presentation, we covered four topics. We first defined what a product is, and then we looked at why it's super important to ensure that if you're adding an AI feature, it's value-driven, and why we should not be adding an AI feature just because it's AI. Then we explored the different ways to figure out whether, in your case, an AI feature is even feasible or not. And finally, in the last section, we looked at one way to mitigate the biases that you might have in your AI algorithm. Thank you so much, everyone, for joining, and thank you so much for giving me a chance to speak to you.