Hello, today we will talk about what it means to be an AI/ML product manager. My name is Sai Bandaro, and I'm a product manager with Microsoft. My role at Microsoft is on the Azure Identity team, where I work on developing the identity capability that Microsoft offers. It has several components, like multi-factor authentication, user sign-in signals, and applications, and it relies very heavily on the latest technology. In terms of my background, I started my career as a software engineer with IBM, then spent several years in management and security consulting with companies like Deloitte and EY, and recently made my transition to product management. As a PM, I worked at Facebook, and now I'm at Microsoft. Before all of that, I did my bachelor's in computer science, then came to the US and did my master's degree in technology management, so I was able to get a mix of both engineering and management in my education.

In terms of the agenda, I plan to cover four big topics: first, a general primer on what the AI/ML industry looks like; second, the product manager role; third, how the team structure differs from a typical product team when we're talking about AI/ML products; and finally, responsible AI. When we develop products and try to automate things, there are a lot of considerations we need to make to be responsible and fair and not create problems.

Starting with where AI and ML are used: pretty much everywhere. In today's world, they touch many aspects of our lives. Netflix looks at your past history of what you watched, what you liked, maybe your watch list, and then recommends movies for you to watch next.
And if you click on a recommendation and like it, Netflix learns from that and recommends more such movies. Similarly, Facebook targets its ads, and we all know there's a lot of machine learning involved there. Google has several products that use machine learning: YouTube uses it for recommendations, and Google Translate uses it for natural language processing. Tesla uses artificial intelligence in its cars; with autopilot mode, the cars are to some degree able to drive themselves. Beyond that, even government agencies like the IRS are trying to use AI to detect fraud and find people doing the wrong things. There's also AI everywhere in video games; Minecraft is one example, but virtually every video game has an AI component. And AI has many applications in healthcare, research, and beyond.

Before we start, I wanted to quickly level-set on a few important terms. The first is artificial intelligence. It's really the larger discipline concerned with intelligence demonstrated by machines. When we speak of intelligence, we typically think of human beings or animals using their natural intelligence, but this is about systems: how they learn and how they produce results that are useful to us. The second term is intelligent agents. An intelligent agent is essentially any system that can perceive its environment, learn from it, and take actions that maximize its goal. One very simple example is a thermostat. We wouldn't really think of it as an intelligent system, but it is: it looks at the ambient temperature, and based on whether that is high or low, it automatically adjusts to keep your house as warm or cold as you like it.
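The thermostat can be sketched as a tiny perceive-and-act loop. This is a minimal illustration in Python; the class name, target temperature, and tolerance are my own assumptions for the example, not anything from a real product:

```python
class Thermostat:
    """A minimal intelligent agent: perceive the environment, act toward a goal."""

    def __init__(self, target_temp: float, tolerance: float = 1.0):
        self.target_temp = target_temp  # the agent's goal
        self.tolerance = tolerance      # dead band so it doesn't switch constantly

    def act(self, ambient_temp: float) -> str:
        # Perceive the ambient temperature and pick the action that
        # moves the room toward the target.
        if ambient_temp < self.target_temp - self.tolerance:
            return "heat"
        if ambient_temp > self.target_temp + self.tolerance:
            return "cool"
        return "idle"

agent = Thermostat(target_temp=21.0)
print(agent.act(17.5))  # heat
print(agent.act(24.0))  # cool
print(agent.act(21.3))  # idle
```

The point is only that the agent observes its environment and acts to maximize its goal; there is no learning here yet, which is where machine learning comes in next.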
The final term is machine learning. Machine learning is simply a subset of artificial intelligence, with a very specific goal: look at past data, train or learn from it, and produce outcomes that are meaningful to us. So it helps us accomplish artificial intelligence, but it is only a subset of AI. Most of my presentation will focus on how we can use machine learning in intelligent products and what the PM role looks like.

Coming to machine learning, there are a few simple concepts we need to know. It all starts with data, which is the cornerstone for training models, for systems to learn. Having good data and good labeling is really important. The data is then fed into something called a model. A model is essentially an algorithm; there are several statistical and predictive algorithms that can predict an outcome based on the data you feed in and the factors you tell it are important. For example, for Netflix recommendations, you could have an algorithm that analyzes past data, and you could tell it that factors like the person's past watch history and their watch list are important to account for. The model takes those into account and produces results, usually with some sort of probability attached, say, the likelihood of someone liking a certain movie or series. The final step is validation. We produced a result, but we don't yet know whether it was relevant to the person, whether it was a good recommendation or not, and how we can learn from it. That's where validation and the feedback loop come in.
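To make the data-model-probability idea concrete, here is a deliberately crude sketch in Python. The "model" is just genre-frequency counting over past watch history, nothing like what Netflix actually runs; the function and variable names are my own:

```python
from collections import Counter

def like_probability(watched_genres, candidate_genres):
    """Toy 'model': the fraction of the user's past viewing that falls
    into the candidate movie's genres, used as a like-probability."""
    counts = Counter(watched_genres)   # "training data": past watch history
    total = sum(counts.values())
    if total == 0:
        return 0.0                     # no history yet, no signal
    return sum(counts[g] for g in set(candidate_genres)) / total

history = ["sci-fi", "sci-fi", "drama", "sci-fi"]
print(like_probability(history, ["sci-fi"]))           # 0.75
print(like_probability(history, ["comedy"]))           # 0.0
print(like_probability(history, ["sci-fi", "drama"]))  # 1.0
```

Even in this toy form you can see the shape of the pipeline: past data goes in, the factors we chose (genres) drive the score, and a probability-like number comes out for each candidate.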
Essentially, if the person looks at that recommendation, clicks on it, and watches the entire movie, that's positive validation that the model worked. On the other hand, if the person dismissed the recommendation or said "don't recommend movies like that," that is negative validation. The model learns from it, improves, and hopefully shows better recommendations.

You can apply the same data-and-model approach to other problems. For example, it is used a lot in identity fraud analytics, in products like Google sign-in and Microsoft identity. You can look at user sign-in data, for example, the places people are signing in from, the browser they're signing in from, and several other factors, and the model can use those factors to predict a probability that a sign-in attempt is valid. If it suspects something, for example, if I signed in from the US and then two hours later I'm signing in from Australia, that's a red flag, and it has to do some sort of step-up validation before allowing that person access. So you can see the same working model applies.

Now let's talk about how your role as a PM is structured. The focus of a PM is still on the primary goals, which are basically improving the product: you will still define metrics, strive for a better user experience, be the customer's voice, help with prioritization, and drive new value within your product. All of those fundamentals of the PM role are the same. What is different is that you need to be a little more conversant in how machine learning works, because you need to work with the people who make these products happen.
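Going back to the sign-in example for a moment, the "US then Australia two hours later" red flag is sometimes called an impossible-travel check, and its core can be sketched in a few lines. In a real product this would be just one signal among many feeding a trained model; the function names and the 1000 km/h speed threshold here are my own assumptions:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(prev, curr, max_speed_kmh=1000.0):
    """Flag a sign-in if the implied travel speed between two sign-ins
    exceeds what even a commercial flight could plausibly cover."""
    dist = haversine_km(prev["lat"], prev["lon"], curr["lat"], curr["lon"])
    hours = (curr["time"] - prev["time"]) / 3600.0
    if hours <= 0:
        return dist > 0  # simultaneous sign-ins from two places are suspicious
    return dist / hours > max_speed_kmh

# A sign-in from Seattle followed two hours later by one from Sydney is flagged.
seattle = {"lat": 47.6, "lon": -122.3, "time": 0}
sydney = {"lat": -33.9, "lon": 151.2, "time": 2 * 3600}
print(impossible_travel(seattle, sydney))  # True
```

When the flag trips, the product would do the step-up validation described above rather than block the user outright, since a single heuristic can be wrong.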
Some basics worth learning: the common types of machine learning algorithms, such as regression-based algorithms and k-means clustering, at least the five or so most commonly used ones and their trade-offs; and the difference between supervised and unsupervised machine learning and how you can train models. Finally, as a weekend activity, you can go practice at Kaggle.com. It provides many tutorials, so you can spin up a quick project and play around with it. That would really give you the confidence to do better in your role.

The next section is about how AI/ML teams are structured. In a typical product team, you have the product manager at the center, and your main counterparts are from engineering; five to ten engineers to one product manager is a common ratio in the industry. Besides that, there are supporting functions like documentation, design, legal, and marketing. In an AI/ML product team, along with the engineers who build things, you also have a team of data scientists, and the split of your counterparts differs slightly, because a lot of your time also goes into working with the data scientists and improving the models. You will probably not work as much with the engineers; it's going to be a more or less even split. The rest of the supporting functions are the same.

The next section I want to talk about is responsible AI. Earlier we talked about how AI has so many applications: self-driving cars, recommendation engines, and so on. With all of that, there's also a fundamental aspect of responsibility that we need to think about as PMs.
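As a taste of the kind of weekend exercise mentioned above, one of the commonly named algorithms, k-means clustering, is simple enough to sketch in plain Python. This is 1-D data purely for illustration, and all names are my own; real work would use a library implementation:

```python
import random

def kmeans_1d(points, k, iterations=20, seed=0):
    """A minimal k-means on 1-D data: assign each point to its nearest
    centroid, then move each centroid to the mean of its points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from random data points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Keep a centroid in place if its cluster happens to be empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups of values (e.g. short vs. long session lengths in minutes).
data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.4]
print(kmeans_1d(data, k=2))  # roughly [1.0, 10.1]
```

This is unsupervised learning in miniature: no labels are given, and the algorithm finds the two groups on its own, which is exactly the supervised-versus-unsupervised distinction worth knowing as a PM.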
A lot of the time, the usual PM business goals, like user retention, user engagement, or increasing revenue, have undesirable consequences. For example, if you're a social media platform and user retention and engagement is your primary goal, you might surface content that engages users whether it is good or bad, perhaps throwing ever more extreme content at them, and end up polarizing your user base. Or if you're developing a game and your main goal is user engagement, but the game is so addictive that people are not leaving their homes, that's pretty bad as well; it's not in the best interest of our customers, and it promotes unhealthy behavior.

Some of the most undesirable consequences to be aware of: bias, because if your training data is biased toward certain groups of people, the outcomes of the machine learning will likely carry that bias as well; polarization, which we just talked about as a problem in social media; and privacy infringement, because there are limits on what AI should know and learn from. For example, it is not okay to infringe on people's health data and make recommendations from it. Ethical issues are a big problem too: several social media sites have undesirable effects on teenagers, such as problems with body image. And finally addiction, which shows up in social media, video games, and a lot of other products.

It helps to define a framework and hold yourself and your product accountable to it. This is one sample framework, but you can definitely define what works for you and what you think is important.
A few aspects I want to highlight. First, your machine learning outcomes should be interpretable. Coming back to the social media example I gave before: sometimes I think of something, and somehow it shows up on my newsfeed, and I'm left wondering how that happened. It offers no explanation other than creeping me out. The outcomes of machine learning in your product should be interpretable; users should be able to connect how they got those recommendations. Even if it is not directly obvious, you should tell them how you derived a conclusion when you recommend something or ask them to do something.

Second is fairness. You want to make sure you train on data with appropriate representation from everywhere; otherwise there can be bias in your outcomes.

The next big aspect is safety, which overlaps with the polarization consequence I talked about: people feeling bad about their body image, being driven toward suicidal thoughts, or having their location exposed to a potentially bad actor, all of which compromise a person's safety. You can think of it in any domain; for a self-driving car, it means making sure the car doesn't cause accidents, and you need to be really, really sure before you trust a car to drive itself.

The next big aspect is being compliant. With GDPR, the European Commission has drafted a lot of legislation around how companies should use and protect user data.
People should be aware of the applicable regulations, and even where a regulation doesn't apply, it is good to build your products according to good practices and frameworks that are already recognized and in use. We also don't want our products to be addictive; a lot of the time you can add interrupts so that people take a break and aren't hooked on your product forever. And the final one is, obviously, privacy. We want to make sure people's data and personal information are protected, not compromised, and not used other than in an authorized way, and that users know how their data is being used. If you're interested in more responsible AI frameworks, you can look into the framework from the European Commission; Microsoft has a nice one as well, and Google has published a pretty detailed one. I provided links here, feel free to take a look.

To summarize, I have three main takeaways. One: it's always good to have some understanding of how machine learning works, so that you're conversant and much more effective in your role as a product manager of an AI product. You can quickly try out tools like Kaggle.com and learn a bit. You don't need to be an expert or a data scientist, but it helps to know the basics. This is also a good entry point in general: if you're already a data scientist, or you know machine learning very well and are looking to transition into product management, AI/ML products are a good pathway. The second takeaway is to understand how the product team is structured differently and how the product life cycle differs from a typical product. In AI/ML there's a lot more experimentation and a lot more involvement from data scientists; that role becomes much more important and is a key counterpart of the product manager.
And finally, please incorporate responsible AI into your product; make sure you have a framework if your company hasn't defined one. It always helps you create better products, makes sure users trust your products, and creates a better world in general. That's all I had. Thank you for the opportunity to speak.