My name is Rishabh Gore, and I work as a technical architect at Microsoft. Let me get my system ready and share my screen — perfect. Okay, so today we are here for a very interesting topic called Responsible AI. Every organization out there is committed to the advancement of AI driven by ethical principles that put people first, and today we'll look at the implications of this and what the possible repercussions are as well. A little bit about me: I work as a technical architect at Microsoft, and I'm privileged to be part of a very small and interesting group called the Microsoft Technology Center — internally we call it MTC. I work in the Indian MTC, and we are a very small team of four technical architects and one director for the entire Indian subcontinent, so all of us have to specialize in cross-domain solutions. I myself look after all of our IoT service offerings, the entire application development scenario in Azure, and the low-code offering in Power Platform. In my day-to-day job we host customers day in, day out for deep technical discussions. Microsoft has specialists who focus on particular technologies, but whenever there's a requirement for a deep technical discussion — this could be around architecture, around design, or around rapid prototyping — that's where we come in as MTC. We show our customers the so-called art of the possible when it comes to technology. You can find me on LinkedIn under my last name followed by my first name. I know it's pretty early, day one of the DevCon, and I know it's DevCon, but my talk is not going to be code heavy — to be very frank, I won't be showing any code at all. This talk is just to get your thought process started, to get your brain juices flowing, if I may.
So sit back, grab a cup of coffee, and let's get started. If you think about any big-data IoT solution out there today, it's all about three components. First, things — these could be your sensors, your microcontroller units, anything — and they are generating vast amounts of data. Second, you need to work on top of that data and analyze it to get some insights. And third, once you're done with the analysis, you need to turn those insights into actions. Now here's the interesting piece: your actions will in turn generate more data, thereby completing a digital feedback loop. What's really interesting about this slide is that it's a very easy concept to understand, but if you take it one step further and really think about implementation, that's where all the complications come into play. That's where you really start to think about how to implement this solution at scale — it's a very complex piece. In fact, before going into how to do this, there's a general question we first need to ask ourselves: what makes AI different from any other technology, and why responsible AI at all? There are multiple reasons. First, the pace of innovation around this technology is absolutely incredible. For example, in 2018 we achieved human parity in machine translation, and in 2019 we achieved human parity in general language understanding. Just think about how rapidly it has transformed — and that's just the last five years I'm showing on this slide.
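The three-stage loop described above (things generate data, analysis yields insights, insights drive actions, actions generate more data) can be sketched in a few lines. This is a minimal, purely illustrative sketch — the function names, readings, and the temperature threshold are made up for the example, not part of any real IoT SDK:

```python
def sense() -> list[float]:
    """Stage 1: things (sensors) emit raw readings, e.g. temperatures."""
    return [21.5, 22.1, 23.8, 25.2]

def analyze(readings: list[float]) -> dict:
    """Stage 2: turn raw data into an insight."""
    avg = sum(readings) / len(readings)
    return {"avg_temp": avg, "overheating": avg > 23.0}

def act(insight: dict) -> str:
    """Stage 3: turn the insight into an action."""
    return "start_cooling" if insight["overheating"] else "no_op"

readings = sense()
insight = analyze(readings)
action = act(insight)
# The action changes the environment, which changes the next batch
# of readings -- that is what closes the digital feedback loop.
```

The hard part the talk goes on to discuss is not this loop itself but running it responsibly and at scale.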
Secondly, it's the proximity of these AI technologies to human intelligence itself and to how we personally experience the world around us — the whole concept of sensing. And lastly, the biggest difference from any other technology is AI's path to both harm and good, which is what is driving today's conversation. In fact, let me take you 30 years back. Thirty years ago, every business out there was looking at software as a way to redefine how it ran its operations. Systems of record were being created that could manage every core process in the enterprise — from accounting and payroll to resource planning and customer management. This change was foundational to the digital transformation of an organization. But as big a change as it was, the digitization of core processes didn't alter the primary business of a company; it just made it much more efficient. In the past decade, however, systems of record have been extended into the so-called systems of engagement. What I mean by that is that software is now redefining how companies engage with their customers, how the customers themselves use or buy the products, and eventually even what those products actually are. So along the way, software evolved from being focused on efficiencies to becoming a core aspect of every business out there. If I really think about it, it quickly became the part of the business where differentiation between competitors happens. Let me give you a simple example: think of Netflix, Airbnb, Uber, or Amazon. Referring to these companies as a media company, a real estate company, a transportation company, or a retail company sounds pretty weird in our own heads, because they are, in a sense, truly software-driven companies — they understand that software is the primary function of their organization.
So modern organizations really need to look at software as something infused through every aspect of their business, a critical component of their operational efficiency. Now, this is something we have been seeing for the past 10 years, but the full digitization of companies has led to a very interesting secondary effect: the proliferation of data. Systems of record transformed what traditionally were paper files into digital stores holding all the business data behind a company's core processes. Then systems of engagement added vast amounts of data about product usage, customer outcomes, customer interactions with your products, and all of those things. This in turn created the perfect environment for systems of intelligence. Systems of intelligence leverage the vast amounts of data generated in an enterprise to create those expert systems. Now, what has changed? Traditionally, the term artificial intelligence was reserved for describing those special occasions when a machine was able to perform tasks normally associated with human intelligence. As powerful as reporting and analytics can be, they are definitely not at human-intelligence level. So what happened — why is everybody using the term AI day in, day out now? I think the primary reason is the growing sophistication of the available techniques, particularly in the area of machine learning: as technology improves exponentially, it allows us to compute over more and more data together. An interesting piece is that new systems of information can now join your existing systems of intelligence. For example, I can connect any of my IoT-enabled devices directly to the cloud today — those things are already there.
And because of this entire change we are seeing today, it becomes imperative that we look deeply at the responsible development and use of AI — an approach that puts people at the center to accelerate positive outcomes. Now let's look at what can happen if this is not taken care of. On a weekly basis there are headlines everywhere in the world about concerns regarding the use of AI, so let's look at some of the repercussions. For example, here is one article I found focused on fairness. The concern is that the African-American community is already over-scrutinized by law enforcement in the US, which suggests that facial recognition technology is likely to be overused on the very segment of the population on which it underperforms. Think about it — it's a very negative paradox, because facial recognition algorithms are often not tested for racial bias. This is something that can creep into your system. Another interesting piece is this New York Times article from just two years back about how facial recognition is accurate only if you're a white man; if you are of a different gender or ethnicity, it was causing issues. Another very interesting one is this — more of a diversity or ethnicity issue. In this picture, an Asian man is trying to upload his photo to a system — in this case passport software — and the software was repeatedly declining his application, saying his eyes were closed. A very important point here: nobody is doing this intentionally. Nobody is deliberately putting these issues into the software they're building. It's just that there was a problem with the software that was developed to identify whether a person is uploading a valid image for their passport application.
The dataset that was used for that identification scenario did not have representation from every ethnicity, and that was causing the issue. Another example — and this is more of an NLP example — comes from Turkish. Turkish has a special structure: its third-person pronoun does not have a gender. So when we input a sentence with the third-person pronoun into a machine translation system — this could be Bing Translator, this could be Google Translate — the translation software needs to map that pronoun to either a "he" or a "she". What we see is that the algorithms end up putting their own biases into these translations. For example, as you can see on this screen, a doctor or an engineer became a "he", but a nurse became a "she". Again, nobody deliberately deployed this; it happened simply because the dataset used for training had more representation from that pattern. The last one is about accountability, which is very crucial: for example, when a driverless car crashes, who gets the blame and who pays for the damage? As you can see from all these examples, if ethical checks and balances are not in place, it can lead to a lot of issues without us ever realizing we are causing them — and we could be the very developers who build these systems. That is why responsible and ethical practices are crucial from the ground up. So — I think we're done with 10 minutes; I have around 20 minutes left and then we'll take a 10-minute Q&A. In the next 20 minutes, I'll show you the approaches that Microsoft suggests and what I have personally learned from my day-in, day-out interactions with customers in the Indian subcontinent.
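The Turkish pronoun example a moment ago can be made concrete with a toy "translator" that resolves the genderless pronoun by majority vote over its training corpus. This is not how Bing Translator or Google Translate work internally — the corpus and function below are invented for illustration — but it shows mechanically how skewed data produces skewed output:

```python
from collections import Counter

# A deliberately skewed, made-up "training corpus" of
# (profession, observed English pronoun) pairs.
corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

def pick_pronoun(profession: str) -> str:
    """Resolve the genderless pronoun by majority vote over the corpus."""
    counts = Counter(p for prof, p in corpus if prof == profession)
    return counts.most_common(1)[0][0]

print(pick_pronoun("doctor"))  # "he"  -- the majority in the skewed data
print(pick_pronoun("nurse"))   # "she"
```

No one coded a gender rule here; the bias falls straight out of the distribution of the data, which is exactly the point of the slide.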
And again, whatever you'll see here — I work very closely with the engineering team in Redmond, and it's something we have been doing from the ground up, globally, across our different subsidiaries. When you talk about responsible AI from a Microsoft perspective, it's broken down into four different pieces; I'll talk about three of them. The first is to have an AI strategy — that top-down approach. The second is to enable an AI-ready culture; this is where we talk about building that culture from the ground up. The third is around having the technology to do all this, which is something I will not be covering today, because the technology we all understand — we all understand how neural networks or image-recognition networks work. But the fourth aspect, which is very crucial and where I'll spend significant time, is the six principles we have announced as Microsoft — that's around responsible and trusted AI. So let's go into the first one, defining an AI strategy. The whole idea here is very simple and straightforward: to enable true AI transformation in your organization, as Microsoft we believe you need to bring AI to three different pieces of your organization. First, you need to bring AI to every application out there. A simple example: an organization can add a chatbot on top of an existing website or application that interacts with users in more natural ways, enabling you to create a much more involved customer experience and to help your employees maximize their time as well. Or it could be as simple as adding predictive analytics to a range of applications, making it easier to plan ahead or decrease operating costs. Now, while this is great, for true AI transformation you can't limit the impact of AI to your applications only.
You have to bring AI to every process, be it internal or external. A logical place to start is empowering your technical development teams: by giving them tools they can quickly and easily leverage, they can bring AI to your processes and deliver major results. For example, enterprises today — and I have talked to the CDOs of multiple organizations who are already doing this — are leveraging intelligent solutions to, let's say, help their marketing teams monitor their brand by tracking user feedback, improve seller efficiency by prioritizing lead generation, or help finance and operations reduce cost and optimize operations through data-driven insights. So having AI in every single process is very important. And lastly, you can't transform by bringing AI to just your developers — every department needs to partner with your developers. In fact, if you read any of Gartner's latest trends for the next decade, the citizen-developer movement is very key there. The whole idea is: how can I democratize AI to every single employee? How can I bring AI to a business user who does not understand computer programming? How can I enable them to use a sophisticated trained model? The entire premise of transfer learning comes in here, as does giving your teams explainable AI models. All of this is part of how you enable an AI strategy across the organization. From technical employees to non-technical users, we believe AI will give them the power to transform how they work and to think more innovatively than before. Next, let's discuss the qualities that characterize an AI-ready culture and how change management can make this culture transformation a reality. So you might ask: what exactly makes an organization's culture AI ready?
From our perspective, fostering an AI-ready culture requires three different pieces. First, it requires being a data-driven organization. And it's not only about accessing the entire data estate — it's not just about creating those data lakes, then dividing them into different data warehouses and then into data marts. It's about ensuring that the data in those warehouses is of the absolute highest quality; it's about ensuring you have the best and most complete data as the foundation for your AI systems, which is of paramount importance. The second piece is to empower your people to participate in the transformation — the so-called democratization of AI — and to create an inclusive environment that fosters cross-functional, multidisciplinary collaboration. The whole premise of the citizen-developer movement that Gartner describes is that your business users should not be dependent on your developers: it should be a team sport, a fused team. Fundamental to empowerment is enablement — giving people the space, the resources, the security, and the support to improve what they are able to do with AI. Empowerment also requires allowing room for errors and encouraging experimentation, and that's something we are doing. In fact, just after this slide, we'll go into how exactly we have approached customers in the Indian subcontinent and the wider Asian territories when it comes to driving these changes, and you'll see the repercussions of those efforts as well — I'll show you some interesting case studies. And lastly, it's about creating a responsible approach to AI, so that you address the challenging questions that AI presents in front of you. From our perspective, this third key element of an AI-ready culture is to foster a responsible approach to AI.
Now, as AI continues to evolve, it has the potential to drive considerable changes in our lives, raising complex and challenging questions about what future we want to see. These are questions that deserve a dedicated discussion, so let's go ahead and explore them more deeply. Before that, I'll just look at the chat window in case there are some questions — if there are none, that's absolutely fine. I see there are no questions; good. You can post your questions as Haimath has already mentioned; I will respond to them, but we'll have a dedicated Q&A at the end as well. Again, this is a very interesting discussion, and to be very frank your questions could be around anything — app dev, GitHub — I'll answer all of those, no worry. So with that, let's move forward. Now the last piece — and this is the most central piece — is where I'll show you something very interesting. I got confirmation about my talk at this event sometime in December, I think, and at that point I started preparing for it. I went through multiple articles in Harvard Business Review, and here is a very interesting piece. Look at these three different articles: all three of them talk about the repercussions when responsibility — when ethical practices — are not built into your AI, and all three were published within the last two months. And here's the interesting part: all three speak to different aspects of your organization. Is getting ethical principles into your system a solution responsibility? Is it a strategic decision? Or is it something that should be done with technology? Again, the entire premise is that people don't even understand this today. When we work on any AI model, our approach is to get the maximum accuracy, the best results — but we don't care about these things.
And in fact, I'll show you what actually happens when you don't. As for Microsoft's own AI journey: in light of this responsibility, we have seen that organizations are finding the need to create processes and structures to guide their internal AI efforts, whether they are deploying third-party solutions or developing their own. We also recognize that every organization will have its own beliefs and standards in its AI journey. So in the next 10 minutes I'll share our principles, and then we'll open straight away for questions and answers, okay? Microsoft's journey started in 2016, as some of you might be familiar. We put forward six principles that guide our development and use of AI: fairness, reliability and safety, privacy and security, inclusiveness, accountability, and transparency. Next, we put these principles into practice, and finally we built tools and resources that make it easier for developers and data scientists to identify and mitigate potentially harmful issues throughout the data science lifecycle. So let's talk about these guiding principles. The first principle is fairness. For AI, this means that AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways. The idea is, for example, that when AI systems are required to provide guidance on medical treatment, loan applications, or employment, they should make the same recommendation to everyone with similar symptoms, similar financial circumstances, or similar professional qualifications. Unfortunately, because AI is designed by humans and trained using data that reflects the imperfect world in which we live, AI systems may reinforce those existing biases.
As Microsoft, we encountered this when we were talking with a large financial lending institution in India that was developing a risk-scoring system for loan approvals. What happened was, we trained an existing industry algorithm using the customer's dataset, and then we trialed the system in a pilot program with the customer. In fact, we ran a side-by-side proof of concept with human loan approvals so that our pilot program could be validated. When we conducted the initial audit of the system, we discovered that, yes, the system was performing very well in the sense that it was only approving low-risk loans. But interestingly, what we found was that the only approved loans were for male borrowers — there was not a single female borrower. Why was that happening? Because the training data reflected the fact that loan officers had historically favored male borrowers. And mind you, this dataset went back some 200 years — the company was established in the 1800s. Inspecting the system allowed us to identify and correct that bias before the system went into production. So what are the recommendations from our end to make sure fairness is taken care of? First, we recommend that you understand the scope, the spirit, and the potential uses of any AI system. Next, ensure that the design teams reflect the diversity of the world — that was one of the major issues that caused this problem. Third, identify and mitigate bias in datasets by evaluating where your data is coming from, understanding how it's organized, and testing it to show its true representation. Next, you can identify and mitigate bias in the machine learning algorithms themselves as well.
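The kind of audit that surfaced the gender skew in the loan-approval pilot above can be as simple as comparing approval rates per group (a demographic-parity check). The records and field names below are hypothetical, and a real audit would use far more data and more than one fairness metric — this is only a sketch of the idea:

```python
# Hypothetical pilot records: one dict per loan decision.
approvals = [
    {"gender": "male", "approved": True},
    {"gender": "male", "approved": True},
    {"gender": "male", "approved": False},
    {"gender": "female", "approved": False},
    {"gender": "female", "approved": False},
]

def approval_rate(records: list[dict], group: str) -> float:
    """Fraction of applications approved within one group."""
    subset = [r for r in records if r["gender"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

for g in ("male", "female"):
    print(g, round(approval_rate(approvals, g), 2))
# A gap like 0.67 vs 0.0 is exactly the red flag that sends the
# model back for human review before it ever reaches production.
```

Libraries such as Fairlearn package this comparison (and stronger metrics) for real datasets; the point is that the check itself is cheap compared to the cost of shipping the bias.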
It's not only the data that will give you those biases, and this is where intelligibility of a particular trained model — something I'll cover in a later slide as well — is also very important. And lastly, you should leverage human review, which is something we did in this scenario too: human review by trained employees to understand the meaning and implications of the AI's results. This becomes extremely crucial when you use AI to inform consequential decisions about people, or wherever human beings are involved. So again, a very important concept around fairness and what can happen when it goes wrong. The next principle is reliability and safety. To build trust, it's important that AI systems operate reliably, safely, and consistently — under normal circumstances and in unexpected conditions as well. How they behave, and the variety of conditions they can handle reliably, largely reflects the range of situations and circumstances that developers can anticipate. Basically, that's where the bias comes in: if you're not even thinking of a particular scenario, it won't be in the model at all. The example here was with a Southeast Asian mining company. They were designing an AI system to identify whether their drivers were alert when operating heavy mining machinery, and that's where we had to implement a very staged rollout process so the system could be thoroughly tested before the large-scale implementation. What you need to take care of here is: one, understand your organization's AI maturity; second, as Microsoft we recommend developing practices for auditing AI systems — and I'm talking about Microsoft, but Amazon is doing the same, Google is doing the same.
At the end of the day, as industry leaders, we are all building these practices so that any organization out there can use them. Third, you should evaluate when and how an AI system should seek human input. This becomes extremely crucial in almost every scenario: it's not about actual replacement, it's a collaboration. And finally, we recommend developing a robust feedback mechanism for users to report performance issues. If you think about it in a traditional sense, that feedback should eventually reach the training datasets so that you can retrain the model and redeploy it. So that's reliability and safety. The next one is a very, very crucial piece: it's crucial to develop AI systems that can protect private information and resist attacks — and I'll tell you a very interesting case study here. The whole idea is that as AI becomes more prevalent in the industry today, securing important personal and business information is becoming more critical and complex. Privacy and data security issues require especially close attention for AI, because access to data is essential to AI systems, and hence it becomes very crucial. The interesting example here is one of Microsoft's own: we released a chatbot on Twitter called Tay. We taught the chatbot to learn from online interactions so that it could better replicate human communication and personality traits. But what happened was — and this was a few years ago — within 24 hours of deployment, users on Twitter realized that the bot could actually learn, and they began to feed it toxic content, turning a very polite bot into a vector for hate speech, and we had to take it down.
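One lesson from the chatbot incident above is to never learn directly from unscreened user input. A toy moderation gate makes the idea concrete — the blocklist terms here are placeholders, and a production system would use a real content-safety service rather than a word list, but the shape of the check is the same:

```python
# Placeholder blocklist; a real system would call a moderation
# service instead of matching literal words.
BLOCKLIST = {"slur1", "slur2"}

def safe_to_learn_from(message: str) -> bool:
    """Return True only if the message passes the content screen."""
    words = set(message.lower().split())
    return not (words & BLOCKLIST)

# Only screened messages ever reach the training queue.
training_queue = []
for msg in ["hello there", "you are slur1"]:
    if safe_to_learn_from(msg):
        training_queue.append(msg)

print(training_queue)  # ['hello there']
```

The design choice is that the filter sits between users and the learning loop, so abusive input can never become training data in the first place.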
So again, the important pieces here are: you need to comply with the relevant data protection, data privacy, and data transparency laws — this could be GDPR, the California privacy act, and so on. Second, you need to design AI systems that maintain, one, the anonymity and, two, the integrity of personal data. And lastly, to protect the AI system from bad actors, you need to make sure it is designed in accordance with secure deployment practices, and moreover designed to identify abnormal behaviors and to prevent the manipulation and malicious attacks that can happen on that particular piece — in this case, the chatbot we built. The next principle is inclusiveness. We know for a fact that there are around one billion people with disabilities around the world, and for them AI technologies can be a true game changer. AI can improve their access to education, government services, employment, information — all of those things. Intelligent solutions — for example, real-time speech-to-text transcription, visual recognition services, or predictive text functionality — are already empowering those with hearing, visual, or other impairments. In this case, we were working with the Australian Department of Human Services (DHS) to build an AI system that could augment their overloaded call center operators. With inclusion in mind, we conducted user research and discovered that some citizens did not have the ability to call the DHS — either because of a disability or because of a lack of phone service — meaning they could only access services by in-person appointment.
So what we did there was create a multimodal, multichannel, intelligent chatbot that could not only improve accessibility for those groups but also provide a more convenient experience for every single individual. So those are the four principles. Underlying them are two foundational elements that are essential for ensuring the effectiveness of the four: transparency and accountability. When we talk about transparency: whenever AI systems are used to help inform decisions that have a tremendous impact on people's lives — for example, in the healthcare sector — it's critical that people understand how those decisions were made. A crucial part of transparency, as I mentioned earlier, is what we refer to as intelligibility: the useful explanation of the behavior of AI systems and their components. In this case, we partnered with a large healthcare provider in India to develop a more accurate risk model for detecting cardiovascular disease. We had around 32,000 patient records, and we were able to build a risk model that was significantly more accurate than the model used previously. But it was important to us that the providers could understand how the system scores patients. So we worked together to create an interface that explains the results — the so-called explainability of the model. The interface uses three categories: dietary, medical, and activity. Not to go too much deeper here, but the whole idea is that it shows the healthcare providers how the model scores patients, enabling providers to develop the most effective treatment plans for their patients specifically. Now the last principle — and this is the second-to-last slide I have, and then we'll open up for questions and answers.
So the last principle is accountability. We believe that the people who design and deploy AI systems must be accountable for how their systems operate. The need for accountability is particularly crucial with sensitive use cases — for example, facial recognition. It's a very interesting piece, because recently we have seen a growing demand for facial recognition technology, especially from law enforcement organizations who see a lot of potential use cases for it, like finding missing children. However, as Microsoft we recognize that these technologies can potentially be used by a government to put people's fundamental freedoms at risk — for example, by enabling continuous surveillance of specific individuals. To that end, we have publicly called for regulation in this space; in fact, we are very vocal advocates that governance should be in place for any technology out there. It's important to recognize that facial recognition is not going to be the last technology with sensitive use cases or corner cases. It only highlights the importance of, one, remaining vigilant, and, two, being accountable for faulty and harmful uses across all the emerging AI use cases we'll see in the future. So at Microsoft we have developed these six principles to guide our use of AI, with the aim of respecting collective values while helping society realize the full potential of AI, and we encourage organizations to do the same. From holistically transforming businesses and industries to addressing critical issues facing humanity, AI is already solving some of the most complex challenges out there and redefining how humans and technology interact with each other.
So with that, today, in these very brief 32 minutes, I have outlined some of the steps that we have taken, and are taking, to prioritize responsible AI, in the hope that our experience can help others as well. However, we recognize that we do not have all the answers. It's very true, for example, with the Twitter chatbot that we created. And every organization out there has to have its own beliefs and standards. As organizations and as a society, our steps toward responsible AI will need to continually evolve to reflect new innovations and lessons from both our mistakes and our accomplishments. By engaging with AI in a responsible manner, we can ensure that it fulfills its promise to create a better future. So thank you all for attending this session today. The next step for all of you is to share what you learned today within your organization, so as to discover what opportunities AI presents for yourself and for your business. And now, with that, we'll open up for questions. I see there are a few questions. None in the Q&A, okay, one in the chat window. Hey, if there are any questions, maybe you can ask me those, it's okay. Well, Rishabh, I can see one of our attendees has raised a question. Are there some open source projects following the guidelines you're talking about? I mean, if you have any idea about any open source project which is also following the guidelines that you have described during your session, it would be great information for the person who is looking for it. Okay, off the top of my mind, I don't have an answer for that question, but I'll look into it after the session, and I'll make sure that if I find some links, I'll share those. Whatever I discussed today is more or less case studies with our customers. Again, because I personally work with customers day in, day out, that's what I had. But yeah, if I am able to find some answers, I'll make sure I share those links with you. Yes.
Hey, Monty, how are you? As we wait for another question from our audience, I have one question. You brought up a really interesting case study about loans in India, where your training data was male-oriented and therefore your AI models were male-oriented as well. Thankfully, you looked at the data and saw that it was lopsided and it wasn't approving women. But what are techniques that people can use to understand whether there is bias in their models, whether it's sex or age or whatever? Yeah, so pretty good question, Eric. Basically, one key aspect is explainability. And again, every big organization is actually working on this, making sure that the trained models out there have that component: if my model is giving some particular result, why exactly is that result coming out? That's one beautiful way to learn from the machines themselves. The second piece that I mentioned in my talk as well is about having checks and balances, having some humans in the loop who can make sure that, okay, when my system is going into pilot, when it's at a proof-of-concept stage, at that time you have those checks and balances in place before going into production. Because, I mean, our chatbot went into production and then we faced those issues. It was actually on Twitter, and that's when we realized people can also learn that it is learning, that it can be taught to speak in a different way. And it became a disaster for us as an organization. So would one technique be to make sure that those human checks are part of your test plan and you validate? Absolutely. But think about it: at a massive scale, it's just not possible to have human checks and balances when it comes to machine learning. If it was a typical software development lifecycle, yes. But when it comes to machine learning models, it's not possible to have human checks and balances at every stage.
And that's where it becomes important that you reevaluate all the decisions that your AI algorithm is taking. And that's where explainability becomes a key concern.
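One concrete way to run the kind of bias check discussed in this Q&A is to compare approval rates across groups in the model's decision log. This is a minimal sketch with made-up data; the `disparate_impact` ratio and the four-fifths threshold are standard fairness heuristics, not something prescribed in the talk itself.

```python
from collections import defaultdict

# Toy decision log of (group, approved) pairs -- hypothetical data
# standing in for the loan-model audit described in the case study.
decisions = [
    ("male", True), ("male", True), ("male", False), ("male", True),
    ("female", False), ("female", False), ("female", True), ("female", False),
]

def approval_rates(decisions):
    """Approval rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def disparate_impact(rates, protected, reference):
    """Ratio of the protected group's approval rate to the reference
    group's. Values below ~0.8 are a common red flag (the 'four-fifths
    rule' heuristic)."""
    return rates[protected] / rates[reference]

rates = approval_rates(decisions)
print(rates)  # male approved at 0.75, female at 0.25
print(disparate_impact(rates, "female", "male"))  # well below 0.8
```

A check like this is cheap enough to run on every retrained model as an automated gate, which is one way to keep the "checks and balances" idea workable at the scale where per-decision human review is not possible.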