Pierre, artificial intelligence, like operations, is in everything these days — from the way we work to how we go about our daily lives. Just look at what we were just told: there's an AI in the back end analyzing all that security information to find those holes.

Exactly. But the biggest question now is: is artificial intelligence being used ethically? And that's what we're going to discuss in the panel today. So I'm going to drop you out and we're going to pull in our panel. There we go — I love technology. All my friends, come and join us. A quick round of the room, if we can, for introductions. We'll start with Sarah.

Hey, I'm Sarah Bird. I lead responsible AI for Cognitive Services.

Next up, Venky.

Hi, I'm Venky. I'm the head of product for Azure Cognitive Services.

Hi, everyone. I'm Mira Lane. I'm the director of Ethics and Society, and we work with Venky and Sarah on deploying responsible innovation across the company.

And hey, I'm Josh Lovejoy. I lead design for Ethics and Society.

Now, this conversation that happened during Ignite caused such a stir that we had to grab another 30 minutes. A lot of IT professionals are interested in participating in the adoption of artificial intelligence at their organizations and in the services they use. But because IT professionals are so often seen as reactionary, they're rarely able to put their best foot forward and help incorporate artificial intelligence properly. The whole discussion around ethical AI use creates an opportunity for IT professionals to be not the gatekeepers but the heralds of the proper use of AI.

So we're going to start this panel with Venky. You've been with Microsoft a long time as a PM, and following your trajectory, you're now the head of product for Azure Cognitive Services, where you lead a portfolio of AI-enabled cloud and container services delivering world-class AI in the speech, vision, language, and decision categories. When did responsible AI become fundamental and important to engineering?

Great question. My goal as a head of product is first to build great product that really helps our customers, and the way we know we're helping our customers is by driving adoption. As you mentioned, adopting AI at scale is a big endeavor, but it also brings up a lot of interesting questions: how do you use it responsibly? And how was it developed in the first place — what was done? There are a lot of questions, both in the technical community and in society at large. And to me there's only one right answer, which is that we need to dig in and start doing the work, so that people can really trust the AI we're building and use it in their solutions.

That's interesting, because we hear a lot about the use of AI. I've actually heard the phrase "rub a little AI on it and you'll either save money or make money." It's an interesting premise. With the ethical use of AI, what is the strategy here? What is the work that organizations and IT professionals should be driving towards?

Is that a question for me?

Sorry — yes, back at you, Venky, but anybody on the panel can jump in.

Well, here's the thing, right?
In the inclusion of AI in a lot of organizations, we see, "Oh, I'm going to add Cognitive Services to my solution." I'll give you a perfect example. We worked with the Missing Children's Society of Canada to enable artificial intelligence to deduce when a specific sentiment was carried out in conversations on social media between a child and a possible abductor. There was a lot of interest from groups outside the Missing Children's Society of Canada to incorporate facial recognition, which at the time we said no to, because from an ethical standpoint the permissions around capturing those faces were something we wanted to be very cognizant of. So in terms of the adoption of AI, and specifically the ethical use of AI, what are organizations looking towards, or what are they starting to glean?

I can take this one, actually. As Venky was saying, there are really multiple steps that an organization needs to think about. One of the first steps, at the very beginning, is to think about the impact of the technology. Who is it going to work really well for? What are the failure modes we'd want to avoid? We want to do an exercise where we really understand all of the potential impact, so we can design the technology to achieve the benefits we want while avoiding the cases that we, as creators, would be very unhappy to see happen. So the first step is really thinking about people — thinking about this impact.

The next step is thinking about how we develop the technology. We have to develop it responsibly. If I'm the one building the model, I need to make sure I have diverse training data and that the model actually works well in different scenarios and for different groups of people. Part of it is the way you actually build it; there are a bunch of considerations you need to take into account during development.

And then, as Venky was saying, it's not enough that we build it responsibly — we have to empower people to use it responsibly. We need tools like transparency notes, deployment guidelines, and documentation to help people better understand how to take this very exciting, powerful technology and use it successfully in their context. We're also looking at technological innovations to enable that responsible use. So we really have to think about the entire flow, and that's where all organizations are going to need to move: thinking about each of these phases when they're looking to develop AI, or to adopt it and use it ethically.

So, Sarah, with a lot of IT professionals in the audience today — they're often seen as the enablers of end-user access to this data — how does data fit into the responsible AI story?

Data is a huge part, right? Data is kind of the lifeblood of AI; it's how we power everything, and it's very hard for a machine learning model to learn something that isn't in the data. So it's a key part that, we've found in practice, you have to think very, very deeply about. And it's not just about getting as much data as possible. It's about making sure you have the right data, that you're intentional about the data you're using to learn from, and that you think about how this data was created.
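As a rough illustration of the kind of Cognitive Services call behind a scenario like the Missing Children's Society one described above, here is a minimal sketch assuming the Azure Text Analytics SDK for Python; the endpoint, key, alert threshold, and sample messages are all placeholders, and a real system would route flagged conversations to human review:

```python
# Hypothetical sketch only: endpoint, key, threshold, and messages are
# placeholders, not details from the actual MCSC deployment.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

messages = [
    "Had a great day at school today!",
    "Don't tell anyone we've been talking, okay?",
]

# analyze_sentiment returns one result per input document
for doc in client.analyze_sentiment(messages):
    if doc.is_error:
        continue
    # confidence_scores holds positive / neutral / negative probabilities
    if doc.confidence_scores.negative > 0.8:  # placeholder threshold
        print(f"Flag for human review: {doc.sentiment} "
              f"(negative={doc.confidence_scores.negative:.2f})")
```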
That's the topic I touched on before: does the data have the diversity we need, so the model can learn the right things? Is it really the right data for this problem, or was it collected in a different situation, so it may not reflect what we want to learn in this space? A key piece is that we want to be really thoughtful and intentional, with a deep understanding of the data we're feeding into our system for it to learn from, because that's going to have a huge effect on what comes out the other side. And to be honest, it's more than just the data. As we're building these systems, we need to make sure we have the right information as part of our design and our process. So I would love to hear Josh talk more about how we get the right information.

Yeah, for sure. One thing we can probably all agree on is that what matters most at the end of the day is whether something actually works for people — and better yet, that it delights them when they interact with it. And if it doesn't, why not? The reality is there's no way to know perfectly whether something's going to work for people, but the best step forward is to try to walk a mile in their shoes. Who are the people that are actually going to use the stuff we're trying to build, in what contexts, and what kinds of jobs are they trying to get done? That's the heart of what user experience — UX research and design — is. We try to connect the dots between what a technology is capable of doing and the ways that people, we think, will benefit from using those capabilities.

One example comes to mind. It's not as sensitive as the facial recognition one with kids that you brought up, but it's in a similar vein, about how generalized use sometimes leads us down quirky pathways. I worked on a product that was trying to add functionality to a camera, to photo-taking, and the user need was taking better candid photos of familiar faces. The initial version was throwing us some weird, unexpected results — it just wasn't consistently getting great shots of people. So we built custom debugging tools that let us simulate: we could turn certain models on and certain models off. And when we dug into the person classifier, what we noticed was that it was getting — we use words like "activation" in AI — activated by hair length as a proxy for women. So even though we were looking for candid photos of familiar faces, we were getting a lot of shots of men's faces, since men typically tend to have shorter hair, while for women, who on average have longer hair, we were getting shots of the side or back of their head. What we actually had to do was specifically ignore that model and then double down on our data collection efforts, just like Sarah was saying: we needed to specify equal representation across both gender identities, and more importantly, because the goal of the app was to take photos of faces, we needed to make sure that every piece of training data included a well-framed shot of somebody's face.
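To make the kind of constraint Josh describes concrete, here is a hypothetical validation pass over an image-annotation manifest — the file name, column names, and tolerance are invented for the sketch, not taken from the actual project:

```python
# Hypothetical sketch: checks the two training-data constraints Josh
# describes — a framed face in every example, and balanced representation.
import pandas as pd

manifest = pd.read_csv("training_manifest.csv")  # placeholder path

# Constraint 1: every training example must contain a well-framed face.
missing = manifest["face_bbox"].isna().sum()
if missing:
    raise ValueError(f"{missing} examples lack a framed-face annotation")

# Constraint 2: representation balanced across gender identities.
shares = manifest["gender_identity"].value_counts(normalize=True)
print(shares)
if shares.max() - shares.min() > 0.05:  # placeholder tolerance
    raise ValueError("Imbalanced data: collect more before training")
```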
So from an IT pro's perspective, Venky, how do they fit into the whole process of responsible AI?

They've got a critical role. When I think about the work we did getting AI developed and deployed at Microsoft, there are so many questions around responsible development and use. It really is impressive, the range of skills you need — and, given how new the whole field is, the realization that almost no one knows all the answers. Everyone knows parts of it and has clear points of view, but there's no process, there are no policies yet in place, to simply answer the questions easily. So what we've done — and this is really my guidance — is to get all the experts together: experts in technology, in policy, in legal, in design. Bring them together and have a safe space to talk through what you're trying to accomplish. As Josh and Sarah mentioned: what is the problem you're trying to solve, how does AI fit in, and what issues come up? In our own experience, we have a ship room where we go through these questions, and it's just impressive how many different perspectives there are. They're all right, but they're all different parts of the story. So you really want to synthesize, and the IT pro has a really great role in facilitating that synthesis — so you can actually say: look, this is what we're going to do, and here's why we're going to do it. It's not perfect; as Josh mentioned, it's not going to be 100 percent right. But we feel like we understand the problem well enough, and second, we have the systems in place to learn from it. I think learning, again, is a critical part that's going to come up over and over.

Yeah. And it's even bigger than just pulling the groups together, as Venky is saying — we've had to build a team that has all of this expertise in-house. We've brought together design, user research, people who can think deeply about privacy and security and how we handle data, as Sarah was mentioning, and even people with deep backgrounds in tech ethics and philosophy. We've brought these groups together to create a multidisciplinary team, and we use that team to augment our engineering teams, because we sometimes need to bring in those experts and integrate them directly. The important thing to realize is that you can't hire everyone, so you have to have practices that let more voices be heard. Whether it's bringing in outside experts, or even the end users and stakeholders who are impacted by what you're building, you need to find ways for them to participate and actively co-create. Recently we released a practice we call Community Jury, which is a way for stakeholders to hear directly from product teams and even co-create solutions to challenging problems. So as IT pros, I think it's important to start thinking about how you convene groups — insisting that teams are fully constituted so they're empowered — but also finding mechanisms to bring in those additional voices that you don't hire for, but that are part of building the product together.

So with responsible AI, how do you make trade-offs? Venky, as a senior product leader, how do you balance responsible AI with AI investment and development, and how do you decide how to fit these things together?
Oh my goodness, this is the best question, because I think the heart of this enterprise is figuring out how to make the trade-offs. I've seen opinions on one side of, "I have the coolest technology, the coolest data science, that can do this thing" — and you're like, really, do you really want to do that? And on the other side, "all this AI is unsafe because it's probabilistic and it's probably going to get things wrong — we shouldn't even ship this product." You see this entire spectrum, and my role is really to figure out the right path. The principle we've used in our team is to maximize learning — we're at a very, very early stage of both AI adoption and, specifically, of understanding responsible AI development and use. The technique we mostly use is to figure out: look, there are probably a hundred things that are issues, and maybe ten of them are ship-stoppers — we really cannot ship without solving those problems; otherwise it's a face-palm moment. You have to mitigate those up front. But then there are still ninety more issues, and the question is: how are you going to learn? So we spend a lot of time thinking about the processes and the technological ways in which we can learn. For example, we've introduced gating for some of our services, just so we can better understand what customers are trying to do with the AI. Otherwise, if you're just an open-ended service, people open up the API and start using it, and we have no idea what they're doing — and that's correct, because we shouldn't see customer data. But we want to learn. So we've introduced a new step in our system that explicitly asks people to opt in: to use our service, tell us more about what you're trying to do, and why. That allows us to learn, and it allows us to build policies over time. The idea is to gain the learning, and as you learn, you mitigate the issues that come up, and then you start scaling safely. You start small, but then you scale safely. And that's the process we've put in place in Cognitive Services.
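As a sketch of the shape of the opt-in gate Venky describes — not the actual Cognitive Services process, which involves human review, and with every name, category, and policy entry invented for illustration:

```python
# Hypothetical sketch of an opt-in access gate: customers state their
# intended use before getting access, the stated use is checked against
# an evolving policy list, and every decision is logged so the policy
# can be tightened or relaxed as the team learns.
from dataclasses import dataclass

DISALLOWED_USES = {"unconsented surveillance", "inferring protected attributes"}

@dataclass
class AccessRequest:
    customer: str
    intended_use: str   # free-text answer to "what are you trying to do?"
    use_category: str   # category assigned by a human reviewer

def log_for_policy_learning(request: AccessRequest) -> None:
    # Placeholder for the learning loop: record the stated use.
    print(f"logged: {request.customer} -> {request.use_category}")

def review(request: AccessRequest) -> str:
    """Gate access on the stated use, and record it so policy can evolve."""
    if request.use_category in DISALLOWED_USES:
        return "denied"
    log_for_policy_learning(request)
    return "approved"

print(review(AccessRequest("Contoso", "count shoppers in store aisles",
                           "retail analytics")))
```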
So speaking of trade-offs, Josh, how do we decide who we're designing for — and who we're not designing for?

Yeah, it's a really, really important question, and a really hard one. Like Mira was saying, it's not comfortable — it's not an easy conversation to describe who, if anybody, you're not designing for. I haven't met anybody who got into this industry because they want to exclude people, right? And it's always easier to describe how inclusive or global we want our products to be. But just like Venky was saying, this is the reality of prioritization. We've always been doing this: which features are we going to prioritize? On what timeline, for which markets, using which testing protocols? And that last one — drawing from a non-AI field for a second — testing protocols are actually why, sadly, cars are less safe for women than they are for men. Somebody had to make a call at some point about a production requirement for crash test dummies. They had to figure out what body type would best represent people, so we could figure out how to keep them safest. But that call ended up defaulting to an average male body type, and the result is that people are more exposed to harm.

But AI is different. It doesn't have to be so rigid. It gives us this opportunity for the system to learn, to be teachable, just like Venky was talking about — and that comes with new types of accountability. In the past, we had to limit our logic to the code we could write, the things we could actually invent as rules. With AI, we don't create the rules so much as we show examples of the outcomes we're trying to make possible. So unless we've actually stated those goals outright — who will this work well for, in what contexts, to get what jobs done — we risk continuing on with whatever goes without saying. Everybody's different in different ways, but certain groups end up becoming invisible, or continuing to be invisible, when it's too uncomfortable to talk about what makes us unique and distinct in our contexts and characteristics. And then the stuff that goes without saying just keeps going without saying, because we can't measure it.

So this all sounds great. Sarah, can you tell us more about where you see challenges and tensions in implementing this?

Yeah. Venky actually mentioned a very significant one earlier: how do we balance privacy — in our case, customer data and customer privacy — with our ability to learn and to debug? To understand how someone is actually trying to use this, and what real errors they're seeing? We've had to do a lot of, I think, genuine innovation in figuring out how to build feedback mechanisms so that people can report what they're seeing, or share the information they want to share. We also see a very similar tension between privacy and fairness. As Josh was saying, it feels uncomfortable to call out specific groups — to have, say, someone's race explicitly labeled in a data set. But it's very challenging for us to understand, and to build a test that tells us whether the system works well for a group of people, if we don't have access to that data, if we don't have those labels. There are a lot of tensions like that we have to navigate, and I do think it's led to a lot of innovation — it's not just one way or the other; we have to design new mechanisms that let us get the best of both. But it's an everyday exercise of bringing all of these different perspectives together to figure out how to navigate these tensions and trade-offs.
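A minimal sketch of the disaggregated evaluation Sarah is describing — you need the sensitive-feature labels to even ask whether the model works for each group. This uses the open-source Fairlearn library; the data here is made up for illustration:

```python
# Hypothetical sketch: per-group evaluation with Fairlearn's MetricFrame.
# The labels and predictions below are invented toy data.
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
gender = ["F", "F", "F", "M", "M", "M", "F", "M"]  # the label that privacy makes hard to collect

frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(frame.overall)       # aggregate accuracy hides group gaps
print(frame.by_group)      # per-group accuracy exposes them
print(frame.difference())  # worst-case gap between groups
```

Without the `gender` column, the overall number is all you can compute — which is exactly the tension between privacy and fairness testing that Sarah describes.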
And Venky, are there examples where we've put this all together?

Yeah, thank you. We just announced at Ignite a new addition to our vision product called Spatial Analysis, and it was built from the ground up with responsible AI in mind. To set the stage: Spatial Analysis allows us, for the first time, to use AI on moving images. Until now, our vision service has mostly worked on static images — you give it a picture and it tells you what's in it; it captions it. Now we're able to look at a video feed and start running analysis on it. With Spatial Analysis we can look at a space and figure out where people are and how close they are to each other. That's super timely in the time of COVID, because our customers are beginning to use Spatial Analysis to understand how people flow through their buildings, find the choke points where people are not socially distancing, and then use that insight to reconfigure the movement flows in the building. So it's really useful, but it opens up all kinds of questions about privacy — what can you see? So much was weighed in the design, from the data capture, to consent for the people whose data goes into the models, all the way to the fact that, for the first time, we're shipping responsible-use guidelines with the release. In our documentation we have this principled guidance on: when can you use it? What have we done with it? How was it developed? What is it good for? How should you, as an IT pro and as a developer, use this technology — where should you use it, and where should you be worried? Here are the questions we want you to ask. So it really has been our first example of end to end, from a lifecycle perspective — looking at it responsibly from the very beginning all the way to release.
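This is not the Spatial Analysis service itself — just a hypothetical sketch of the kind of post-processing insight Venky describes: given detected person positions in floor coordinates, flag pairs standing closer than a distance threshold. The detections and threshold are made up:

```python
# Hypothetical sketch: flag pairs of detected people closer than a
# social-distancing threshold. Positions are invented floor coordinates
# in meters, as a person-detection pipeline might emit per video frame.
import numpy as np

positions = np.array([[1.0, 2.0], [1.5, 2.2], [8.0, 3.0]])  # made-up detections
MIN_DISTANCE_M = 2.0  # placeholder threshold

# Pairwise Euclidean distances between every pair of detected people
diffs = positions[:, None, :] - positions[None, :, :]
dists = np.linalg.norm(diffs, axis=-1)

# Keep each pair once (upper triangle, excluding the diagonal)
too_close = np.argwhere(np.triu(dists < MIN_DISTANCE_M, k=1))
for i, j in too_close:
    print(f"people {i} and {j} are {dists[i, j]:.2f} m apart")
```

Note that insights like these come from positions, not identities — which is part of why the consent and privacy questions Venky raises still have to be answered in the deployment, not the code.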
And I think a key part of that has been transparency, right? Making sure we share the information that we know as the model builder: here are the limitations; here are the things we know will help you get higher accuracy; here's what we've learned, from the research we've done with our customers, about what people react to and want to see. Now, of course, that only goes so far. We don't know all of the deployment contexts and all of the information, so it's a step toward empowering customers to take their own context into account, ask the right questions, and make the appropriate decisions for their situation. Transparency is actually a key theme we've seen across the board, and one of the things I'm very passionate about is enabling it with tools. One of the reasons we've built a lot of our responsible AI tools in the open source from the beginning is exactly to enable this type of transparency: you can understand exactly what a tool is doing and what it's not doing — this is the fairness analysis the tool does; it's not magic; it doesn't do some other fairness analysis — and it also lets new people come in, add new ideas, contribute, and innovate more rapidly. It's really important that people can understand how the technology and the tools work, so they can make the right decision in their context. That's one of our principles, but it's also just a really important approach we're trying to bring to every way we build products and tools — to enable people like you to take the next step and use them responsibly.

And as Sarah and Venky mentioned, we're trying to be very transparent about our technology, but it also needs to be consumable by people and written in plain language. What you'll notice with the documentation around Spatial Analysis is that it's not our typical API-style documentation. You'll see all of these perspectives coming in: we share more about the customer scenarios and the guidance around how people think about these technologies, what they're concerned about, and how you deploy them in accordance with the principles we care about. Sarah mentioned some of our principles; they include fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The last thing we want to do is just throw technology out there without any clear guidance or recommendations. So we also proactively recommend how you address and disclose information about the system and the way it's working, in a way that leaves people feeling informed and empowered to choose whether to opt in. What you'll see in our documentation is a lot of the deeper research we've been doing, used to empower IT pros to deploy things in a really responsible way. We want you to have all the knowledge that we have; we don't want you in the dark, trying to figure out how to make choices. We're trying to be very transparent — and humble — about it at the same time.

The one thing I would add is that this has been a mindset shift for us in terms of our APIs. As developers, we generally want to say anyone can use it for anything. With AI, we're really getting to the point where you just can't say, "well, it's just an API — I call it, there's an output." There's a lot more infrastructure and supporting material to put in place to make sure we support the right use of it. It has been quite a change for us, and it will be a change for all of you as you implement it, to think about all these extra things. We've been wrestling with: is it a tax, is it not a tax? Well, it is new work — but it's super important work that helps us, long term, really get adoption and get people feeling safe with these technologies. It's been an important thing for us to learn, and it's important to recognize that it isn't just easy; it is work. You really have to go in with that learning mindset — and lots of cool stuff will happen with it.

So, it's awesome, the effort you're putting forth to enable this. I wanted to ask the panel: how long before it becomes just normal that this is a consideration — that ethical use is part of the design and build process — as opposed to just tacking AI onto the technology?

That's a really good question. How long before it's normal? We're going to push as hard as we can to make it normal as quickly as possible. This is where we're very proactive about talking about responsible AI and all of the different disciplines and expertise you need to bring into the thinking.
And with any big shift — even if you look back at the way people shifted with security and privacy and accessibility — there's an early phase where we're all still figuring things out, trying to learn and understand. Then there's a point where you have enough critical mass that a lot of people are saying: OK, this is the way to do it. What we're trying to do is learn as quickly as possible and share that learning just as quickly, so the industry moves and shifts as rapidly as possible. We're talking about AI in this moment, but it's a bigger question — a larger question about innovation overall, about how we think about technology more holistically — and we're using this moment to facilitate that larger conversation.

And as someone who's been working on this for a long time, I thought, as with any of these technologies, it would take a long time to make this kind of transformation. But it's actually been amazing to see how quickly people have adopted it and are just hungry for more. Once you see this way of thinking, you can't unsee it. So our biggest challenge now is really building up the practices and the toolkits and solving the hard problems, so that everyone has a way to actually implement this, at scale, in all of these different contexts. It's been incredibly encouraging how quickly people have adapted to thinking this way — but we still have a long way to go on the innovation needed to make this just work in many different settings.

Yeah — at the total risk of dating myself: back in 2001, we had this big moment at Microsoft around security. I was a program manager there, and all these questions came up, and we were just flummoxed. We thought, how is anyone supposed to ever ship anything with so many requirements and so many unknown questions? And you know what? It took time, but we learned, we built the tools, we created the Security Development Lifecycle, and now it's just how we do things. And it turns out it wasn't just good for security — it made better product. To me that's the biggest thing: when we invest in this, we're not just solving some trust issues; we're building better product overall, because these are just the right questions to be asking. So for all of us as an industry, we're going to be working through it, but I expect we'll innovate as usual — we'll build the tools and processes and policies, and this will all become normal soon enough. As soon as possible, I guess.

So I want to thank everybody for being on the panel today — we took the session we had at Ignite and extended it into a full panel. IT professionals are now eager to learn more, so that we can make the ethical use of AI more normal. Where do we go?

Thank you. We have an aka.ms link for that: aka.ms/RAIresources — RAI for responsible AI. I want to be a YouTuber and say it's over here, over there, it's down below.

Well, that's a great first place to start on your journey to understanding the ethical use of AI. I've gone through it a little bit already.
There are great tutorials and great logic there, and a lot of the conversations that happen within an organization from an IT perspective. I heard a great quote yesterday: if I'm the equivalent of a mechanic with grease under my nails, how do I go to the well-suited executive and explain why ethical AI is needed as a best practice inside the organization? The content and resources provided there are that talk track — they can empower an IT professional to have that open dialogue with the executive, or the business decision-maker as we say at organizations, about why the proper, ethical use of AI is so important and inclusive of the world we live in right now. So thank you all for joining us today on the panel. And we're going to kick back to Pierre. There we go. Mr. Roman — oh, I can't hear you.

Yeah, I had muted myself, and I actually sat and watched. It made me realize there are so many things in AI that I take for granted — I never realized the potential bias that's built into the models. This was very informative and eye-opening. And how amazing, specifically, the IT professional's opportunity to build this in from the foundation — to incorporate the ethical use of AI in organizations, as opposed to just tacking AI onto everything and not really being mindful of how we interact with people every day.

That's right.