I bet you're using artificial intelligence on a daily basis. We all are. But have you ever thought about how you, as a service design professional, can contribute to the development of these AI technologies? Well, there's a lot you can do, and that's what you'll learn in this video. Here's the guest for this episode. Let the show begin.

Hi, I'm Jillian Powers. This is the Service Design Show, episode 156.

Hi, my name is Marc Fontijn, and welcome back to another episode of the Service Design Show. On this show, we explore what's beneath the surface of service design: what are those hidden and invisible things that make all the difference between success and failure? All to help you design great services that have a positive impact on people, business and our planet.

Our guest in this episode is Jillian Powers. She describes herself as a passionate evangelist for ethical and humane technology, data-driven operations and digital experiences. She's currently the global head of responsible AI at Cognizant. The reason I'm excited to have Jillian on the show today is that AI is already all around us. Whether you're using text prediction in your favorite email app or applying filters to your photos, you're already using some form of AI. These AI applications have been designed with a lot of care to make sure that they do no harm, hopefully.

Now, text prediction and photo filters are nice, but things really start to change when AI is used to make high-stakes judgment calls. Will you get that job? Will you get that medical treatment? Will you get that insurance? And mind you, these are scenarios where AI is already used today. According to Jillian, service design professionals need to get involved in the development process of AI applications sooner rather than later, because when that development is driven by technology, as it is today, rather than by human needs, things can go wrong. Really wrong.
So if you stick around until the end of the conversation, you'll know: what is this AI thing anyway? How can you contribute to the development process, even when you're not working for a big tech company? And finally, how do you make a convincing argument that gets you into the right conversations?

If you enjoy conversations like this, about topics that are at the forefront of service design and that help you grow as a professional, make sure you subscribe to the channel, because we bring you a new video like this every week or so. That about wraps it up for the introduction. And now it's time to sit back, relax and enjoy the conversation with Jillian Powers.

Welcome to the show, Jillian. Hello, Marc. Nice to see you. Nice to see you as well. We're going to continue on a topic that was introduced quite recently by a good friend of yours, I think, Carly Burton. Yeah. So for the people who haven't seen that episode, yeah, it's going to be about AI. And if you haven't seen that episode, it's, I think, episode 152. Maybe we'll get into your connection with Carly in a second. But for the people who don't know who you are and haven't looked you up on LinkedIn yet, could you give us a brief introduction?

Sure. So, Jillian Powers, nice to see everyone. I am the responsible AI head for Cognizant. What that means is I work internally, as well as for our clients, to make sure that the artificial intelligence we build and support is fair, robust, transparent and ethical. Wow, already so much to talk about. Now, I don't think I asked you, but what is your connection with Carly, just out of curiosity? We just know each other through the grapevine, and through our experiences working in this industry, working on services, and being at that intersection of product, math and outcome. I think those are topics we haven't heard a lot about on the show, and the listeners may need her take on math and product, but this is going to be fun.
Jillian, a tradition here on the Service Design Show is to do a lightning round. I've prepared five questions for you. Your goal is to answer them as briefly as possible, so we get to know you a little bit better as a person, next to the professional Jillian. Are you ready for the five questions? Let's do it.

All right. What's always in your fridge? What is always in my fridge? Vegetables and cheese. What's your favorite holiday destination? My favorite holiday destination is anywhere there's a beach. All right. What was your first job? My first job was a postdoc at Washington University in St. Louis in American culture studies. Well, I guess that was my first academic job. My first job job was a camp counselor at the age of 13. All right. Which book are you reading at the moment, if any? That is a really great question. I'm reading, where is it? I put it over there. Hold on. It was right next to me for the longest time. I'm reading Queer Failure by Halberstam right now. Okay. We'll add a link in the show notes. And the final question, the fifth one, you're doing awesome. Do you recall the very first time you heard about service design? I do. I definitely remember the first time I heard about service design. I was working for Ideacator and we were doing a lot more service design work. I come from an insights background, where I do mostly qualitative research for business development and for technology design. And we started doing a lot more service design projects. And my mind opened up and my eyes opened up, and it was the most exciting thing. Cool. Yeah. We've had Idris Moody on the show as well, a long time ago, I think in one of the first 50 episodes, the founder of Ideacator. So cool.

Now let's jump into the topic of today, Jillian. I'm so excited to explore this, because I recently put out something on LinkedIn about AI and machine learning. I really feel that we're on the verge of a revolution in the design space.
I feel that AI is going to aid the design process in a way it never has before. I see AI being framed a lot in conversations around how it can help in the final service, or how services are going to be AI-driven. But I'm really excited about the potential for it to actually aid the design process itself. I'm not sure if we're going to talk about that today, but you at least got my mind inspired, and it's firing on all cylinders about where this can go.

Now, that was a long introduction, but we're going to address two topics today, or sort of merge them. You're deeply into the topic of ethical AI. And the other one that you mentioned is: let's demystify data science. Am I correct? Yeah, let's talk about it. We'll have that conversation. Both topics are really interesting. For me, data science feels like something that I see all over, but in my bubble I haven't taken the chance to really deep dive into it. I'm curious: how did you end up, from your background as an anthropologist, right? Researcher, sociologist, correct? How did you end up in this world of data scientists and engineers? Can you share that with us?

Definitely. So I am a qualitative sociologist. I work with small sample sizes. I do interviews. I do participant observation. I do ethnography. I do design thinking methods. That's my bread and butter. But when I was in grad school for my PhD, I went to a very quantitative department, where what they did mostly was statistics, survey research and demography. So I spent a lot of my time with math people, who sometimes disparagingly said, oh, that's really nice, you tell cute little stories. Joke's on them: cute little stories are how the world works. Cute little stories are how people understand other people's experiences. So I had this background in really understanding what the quantification of information looked like from that side of it.
And because I was in a quantitative department, I always wanted to do something that fit within my advisor's work. But it was never the type of thing that was exciting to me, not the types of questions that were asked, nor the type of data that we had access to. We always had to use proxies for types of information, because we didn't have access to what I wanted. So I was like, you know what, I'm just going to go spend time with people. That's how I'm going to get my information. And that's really the beginning of how all this happened.

And then as I started working in UX, as I started working in insights research, I just saw how important numbers were. More than once in my career in business, I've had people say: Jillian, this is a highly persuasive story, but you need to throw some numbers behind it. They don't need to be real, but you need to throw some numbers behind it. And that just blew my mind, that we have this idea that math and numbers are the objective, capital-T Truth. So what I really work on is: how do we demystify that? What do those numbers actually mean? And how do we use the right ones for the right work that we're doing?

So if we're not actually thinking about the service that we are creating an artificial intelligence application for, if we are not thinking about all of the users and how they interact with things, then what are we actually doing here, right? We forget a giant part of this. So to demystify data science really is to take it back down to basics: what are we actually trying to accomplish? Data science is a tool. It is a way to accomplish something. So we don't really need to get into the weeds of the math. We don't really need to know all of the intricacies and the details. And honestly, when it comes to artificial intelligence, we have all these black box models we can't look inside, and data sets that are too large to inspect. So what do we do with what we have available to us?
And that's really what I work on. And what kind of questions are you thinking about these days? What's keeping you awake this time of year? There are a lot of things that are keeping me awake. Well, what I love to see about this industry and the space right now is that things are actually, finally, moving. We are seeing legislation. We are seeing frameworks develop. We are seeing people go from principles to practice. So instead of just saying we would like to make sure that our AI is unbiased and transparent, it's like: okay, well, what does that actually mean? How do you make your AI unbiased? How do you make it transparent? And how do you also protect enterprise-level secrets and proprietary information at the same time? So the types of questions that are keeping me up at night are: what are your accuracy levels? And do you have multi-objective outcomes, so that we make sure we can protect people at the same time?

Hmm. I feel that we need to go back a few steps. First of all, we use the concept of AI quite loosely. Let's give it some substance. What do you mean when you talk about AI?

Okay, this is a great question. I start every conversation this way. I am not talking about robots with brains, or human robot brains, or robot human brains. We are not talking about sentient computers. This is not the conversation we're having. Well, Google had that conversation: is this artificial intelligence sentient? The answer is no. It's a really good pattern matcher, right? That's really what this is. It has so much data. So what artificial intelligence is, is complex math built on really large data sets to give you the probability of a correct response. Sometimes that's easy and we just let it fly by us. We don't even see that it's happening in our lives. For example, the predictive text filler in your email.
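To make that predictive-text example concrete, here's a minimal sketch of the idea in Python. Everything in it, including the toy corpus and the `predict` function, is invented for illustration; real email autocomplete uses far larger language models, but the principle of suggesting the most probable next word based on patterns in past text is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then suggest the most frequent follower together with its probability.
# The corpus and function names are invented for illustration only.
corpus = (
    "thank you for your email "
    "thank you for your time "
    "thank you for the update"
).split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict(word):
    """Return the most likely next word and the model's probability for it."""
    counts = followers[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict("for"))  # after "for": "your" appeared 2 out of 3 times
```

Note that the model never "understands" the sentence; it only reports which continuation it has seen most often, which is exactly the "probability of a correct response" Jillian describes.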
Or the automatic camera that takes a picture of your license plate when you speed through a red light. Those are artificial intelligence, right? That's an automated process using big, complex data to get an answer. So it's probability scores. It's these large language models that can sound almost human. It's picture matching, right? There's a lot of computer vision in artificial intelligence. So it really is: okay, here's a picture, we've shown you a thousand other pictures of a horse, is this a horse? It's things like that.

I'm happy that you shared this, because I think one of the first steps is to really understand and unpack this term AI. What does it mean? What can we do with it? Where is it applied? And like you said, it's already very prevalent in our lives, in a lot of areas that we just haven't yet labeled or identified. Which in a sense is a good thing, I guess: when technology becomes transparent, it gets adopted. So that's good. But we have to understand it, and I like that you made it more tangible by saying it's object recognition, or it's text generation, things like that. It's more concrete than the broad concept of AI.

Robot brains, exactly, right? And I feel like sometimes that conversation gets in the way of the harm conversation too. Because if we're sitting there wondering whether Google created a sentient chatbot, we are not having the conversation of: what is the environmental cost of that model? Where is that model going to be used? What are the biases within that model? All of those conversations get thrown to the side because we're having a conversation about whether this thing has feelings or not, which it doesn't. It is pattern matching, right? It has a giant database of information. So based on all of the things it's seen before, it can figure out how to respond to you in a way that makes it sound like it's human.
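That "is this a horse?" framing can be sketched as a tiny pattern matcher. This is my own toy illustration, not anything from the episode or from Cognizant's tooling: each "picture" is reduced to a made-up pair of feature numbers, and the classifier simply votes among the most similar examples it has already seen, returning a label plus a probability score rather than any understanding.

```python
import math

# A "really good pattern matcher": each stored example is a fabricated pair
# of feature numbers plus a label; a new case is answered by majority vote
# among the k most similar examples seen so far, with a probability attached.
seen = [
    ((1.0, 0.9), "horse"),
    ((0.9, 1.0), "horse"),
    ((0.1, 0.2), "not horse"),
    ((0.2, 0.1), "not horse"),
]

def classify(features, k=3):
    """Return (label, probability) from the k nearest stored examples."""
    nearest = sorted(seen, key=lambda ex: math.dist(features, ex[0]))[:k]
    votes = [label for _, label in nearest]
    best = max(set(votes), key=votes.count)
    return best, votes.count(best) / k

print(classify((0.95, 0.95)))  # two of the three nearest neighbours say "horse"
```

The answer is never "yes, that's a horse"; it's "based on everything I've seen before, probably a horse", which is why the quality of what it has seen before matters so much.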
Yeah, or it can recognize images, or it can generate text. And I'm sure you've had enough of this, but the most common example, I guess, is the chatbots, or the assistants that you interact with, where you send in a message and then you get a reply, and it should feel like a conversation. That's the kind of thing that's enabled through AI models. Yeah, right.

And this is where the service design part comes in too. Because let's say we now want to take one of those chatbots, or those large language models, and we want to apply it to a specific area. Let's say healthcare. Well, what if the data that this chatbot or this model was trained on doesn't have, let's say, women's health information attached to it? We might then get really poor examples and poor outcomes. So as we're thinking about how to apply these types of artificial intelligence tools, whether it's computer vision, where it understands pictures, any sort of biometric identification, or these large language models, or advanced analytics: what is the application we want to put it towards? Does it have the value, and does it know enough of the information for that specific use case, to then provide really good answers?

I feel that we're going to make this a sort of AI 101 episode, which is great, because you mentioned something that is quite important: how the model is trained. I'm starting to scratch the surface of understanding what that actually means, but can you again exemplify: what is that?
Okay, so this is the other part of artificial intelligence that I think gets short shrift in the world. While we think of it as automation, that we're making robots and computers do all this work for us, what that really means is that there's a whole bunch of people on the back end doing different types of work now that they weren't doing before. So let's say we take that large language model and we now want to apply it to your specific business use case. Well, we have to train that model. We have to tell this model: no, you got that one wrong, you should try again. Oh, you got that one wrong, you should try again. Every time you do that, you're adding another layer to the model, which will hopefully increase its accuracy. So what training really means is: we take one of these models, which is basically a complex computational outcome, right, it's math, it's computer math, and then we say, okay, you did that thing wrong, do it again this way. Oh, you did that thing wrong, do it again this way.

I'll give a great example of why this matters. Take the COVID-19 pandemic. The data was changing every two weeks, right? Every two weeks things were changing when it came to COVID. If we had a health model that was based on four-month-old data, we were not going to understand how the virus was spreading. We had to train that model on new data very quickly to be able to understand where things were going. So this is also why AI is not static: we have to bring in new information to make sure that we maintain the accuracy rates we're looking for, or at least be aware, like you said, which data served as input for the predictions you're getting out of it.

Right. And again, I'm trying to translate this into layman's terms, but the first stage in training these models is when you serve it a lot of photos and you want to identify cats. You sort of have
to help the model understand: okay, this is an image with a cat. You do that a lot of times at the start, and then at some point, hopefully, the next time you put in an image it will tell you whether this is a cat or not. The reason I'm saying this is that the data you put in, the training that goes into it, and the response that you want to get out of it, that's super important. That's key to the quality of the outcome of the model.

Completely right. Look, I'm sure you've heard the phrase garbage in, garbage out. That's so important when it comes to artificial intelligence and machine learning. So let's take that example you gave: let's identify a picture of a cat. Well, if every single picture that we put into that program has a cat indoors, the machine might not be able to recognize a picture of a cat outside, because it has associated indoors with cats, right? It's not a human brain. It is a computer trying to make predictions and connections, and we don't know exactly how that works. It doesn't have the same sort of logic and context. It doesn't live in the world like we do. It didn't go to kindergarten or get raised by parents, right? It was never taught how ethics or decision making works. All of those unseen cues that we get because we live in the culture and we live in the world, computers don't get. That's why they make silly responses sometimes.

And I'm getting excited about this conversation, because you mentioned culture. That seems, to me, very hard to embed in a model like that. Or it's implicitly embedded, which is maybe even more dangerous. This is what happens, right? Because all information has a little tinge of subjectivity to it. This idea of objective truth, right? Numbers do matter, and they are solid, but we shape the answers that we get. And so it might not be intentional, but our
biases do come through in this. The best example I can give here is: let's say we're training a model to identify content on the internet that we don't want minors to see. But let's say that that training is being outsourced to another country that has a different cultural understanding of what is appropriate and not appropriate. Then all of a sudden we're going to label homosexual content as not appropriate for children, right? That happens. So we have to be really careful about who does our labeling, who does our training, how we're structuring our information, and then what is the outcome that we're driving towards.

So there are so many areas where things can go wrong, or at least where you need to be very aware of what's happening. Once a model is there and you get something out of it, I can also see that you shouldn't take the outcomes as truth. I can imagine that's also one of the big pitfalls.

Well, that's the biggest problem too. So there are two sorts of problems here that I work to help people get around. Number one, we all had math education growing up, and we all sometimes have a little bit of a block around understanding math, right? It was a little triggering, we have some trauma responses around it. So math overwhelms us. The other part of it is, when we see a number, we assume its truthiness, right? We assume that it's true. So let's say I'm some sort of medical professional, a tech, and I see a number. Well, it might not actually resonate with the experience in the room, but I'm like, well, the computer says it's there, so therefore it must be true. So you're totally right. The first thing I always ask people to do is, every time you see a number: how is it being constructed? What does it mean? How is it being created? Where does that information come from? We have to start questioning some of the data that we get, and
that's in general a good practice when you see numbers, even when they are not generated by a computer but by your colleagues. And that notion that computers are objective, I guess, to a certain extent, like "code is law", those kinds of things. It feels like, it's coming out of the computer, so it must be true. And I guess in some cases it follows a standard process and it is always true, but especially with these opaque, amorphous predictive models, it can go anywhere.

Well, for the most part it can go anywhere, but it goes where it goes based on the rules that it's been learning. And it's up to the humans involved in all of those processes to understand what it does, how it works, and what the harms might be. We need to understand where this might fail. And it doesn't mean that you don't release something, right? I've had this conversation before: what is the cutoff, and what is the threshold, for releasing something when we know that it has certain types of failures? It just depends on what those failures are and how you plan on addressing them.

So, another analogy, and let me know if this makes sense or if it completely doesn't. If we take a judge in a judicial system, we trust that person because they went to law school. They know the law, and they have certain norms and values when they make a judgment. We trust that it's the right thing to do, so we've outsourced that judgment to them. We sort of do the same with these models that are trained on a set of data, without actually having that line of credibility: this model went to law school and it adheres to a certain standard. Does that make sense?

So one of my mentors made this comment, and I still use it all the time: when is it unethical not to use AI? You bring up a really great point, right? We know that there are some actions that
people don't do the best at, because we do have biases. Let's say hiring, right? We are really bad at understanding that when we say "this is a good fit", what we're really saying is "this person is a lot like me". So when is it unethical for us not to use an AI? An artificial intelligence, a computer, a data-generated outcome, can be less biased than our human responses. And there are some problems where I, as one person, cannot look at that much data and keep it all in my head. A computer can help me with that. So there are a lot of opportunities to ask: when is it unethical for me not to use this technology to help get me a better answer?

Hmm, I like that. I like that way of thinking. So we haven't really touched upon how service design plays a role in this. I think you definitely see a role for service design and the service design community. I'm geeking out just on the data stuff and the modeling stuff, but can you take us through how you see service design interacting with this subject?

There are two ways. One is: how do we even understand and present what math and outcomes look like to people? So what is the experience, the service and the design of the actual application? And the other one is the use and the utility of the types of models that we have to begin with. Service design can come in at any point of the data pipeline. If we're thinking about an enterprise, and how they use information and how they're trying to create outcomes and use cases, well, service design has to be in the room. Because I find that the technology and the data science are leading. People are like, oh my god, this is so cool, we have this large language model, we should just build 75 million chatbots. Well, nobody wants more chatbots. I hate chatbots, right? What is the use of that? What is the service that we need to design? If we think of it in that framework, with that user focus, then we understand who
we're designing for, what needs to be there, and how to use the technology. It doesn't become technology-led; it becomes challenge-led and outcome-driven.

And this is something that we've heard more often: designers and service design professionals are capable of asking other types of questions, questions that aren't being asked enough right now in the development of these technologies, services and solutions. Completely, yeah. I think service designers have such a role to play in the explainability space of artificial intelligence as well. Because, as I said, math is confusing. A lot of people don't get it. This is really complex computer math, and the way that we explain these models is: well, here's more math to show you how the math works. It's, again, not accessible for most people. A service designer can come in there and ask: okay, what are you trying to show with these types of outcomes? How do we present this to a business user? How do we present this to different types of people? So when we're building master data management programs in enterprises, if we're figuring out how data flows through an organization in ways that protect privacy but allow people to develop AI within their business units, how does that work? How do we understand the information that we're getting? I think service design has a great role there, again, because it brings in the user perspective.

Right. So I'm curious: you already shared many stories, but do you have an example where the lack of focus on the user led to things going wrong? And maybe an example where it did work out, or where it worked out better? I think there are so many, right? That's the challenge, there are so many examples. But it depends on the space you want to go into. There are organizations that take a privacy-first framework when it comes to data capture and sharing
information, and also presenting that back to users. I think that's a wonderful example of how we can have a user-centered understanding. But if we take all of these examples of prediction algorithms: we were so excited by the technology that we forgot that there are people on the other end of it, and that sometimes the data might be flawed in ways that impact our ability to get good answers. For example, there was a recent algorithm used to identify risk in social services delivery. What happened there is that mostly minorities, immigrants and single mothers were thrown off the rolls, which led to poverty, suicide and really bad outcomes. If we had a service designer there, you could think about: okay, how is this model replacing the human experience? Who is being replaced here? How are they using it? What are the human-in-the-loop moments to make sure that we can protect human beings in this process? So service design really allows us to keep the user, but also every single human node in the process, in mind, so that we can make sure we are being as unbiased and transparent as we can.

Two questions arise from this. Do you feel that businesses are aware of this? And if they are, what's keeping them from bringing service designers into the room? And the second question is: from the service design community, where are we? Why? I don't know, where are we?

I think it's just that we're not there yet. It's just really new still, and I feel like data science is bringing the conversation forward. Again, it's because the data science seems so complex, overwhelming and math-heavy that it's dominating the conversations about the utility and the application of things. But I think it's going to have to come. Because if we're having these conversations about explainable AI, well, how do we make it explainable? Explainable for whom? That's a design question, right? That requires us to think
about explainable AI as a service. What does that actually mean for the people who are looking at things, and how do we communicate that?

You mentioned the ability to explain this. Do we need to explain this? The reason I'm asking is, if we look at something transparent, like predictive text in your email program, it doesn't seem like that needs a lot of explaining. Where's the explaining part?

So the explaining is more for the black box models, the models and applications that really have an impact and a high risk. A chatbot might not have a very high risk. Your predictive text might not have a very high risk. But an algorithm that decides whether you get cancer treatment or not, that might be a little bit high risk, right? An algorithm that decides whether you go to the university you're interested in, that might be a little bit of a high risk. The job that you want. What productivity means. Exiting and entering a country, right, any sort of biometric verification around borders. Those are all high-risk applications. And so those are the places where you would want to ask: well, how does this model work? And it doesn't necessarily mean "the math does this". It's: here's where it sits, here are the accuracy rates, here's where it starts failing, and why.

I'm learning so much here. Many of the conversations that I hear, because I'm not in your AI bubble, are maybe frivolous or playful, or, like you said, don't do a lot of harm. But when these things enter life-and-death situations, and they are already there, already in place, we should be, maybe not worried, but at least involved and interested in making them better, or making them the best we can.

And so there are two frames to that. One is that the regulatory language is coming right now. Let's say we have that predictive algorithm for some sort of healthcare outcome. Well, regulation is coming that's going to
say: well, here's the data that this is built on, here is the application, here is where it lives, blah blah blah. There's all of this information that you're going to have to collect for regulators. But then the other part of it is that what you need to demonstrate to a regulatory body is going to be very different information from what you might need to present for a doctor to use this, or for a patient to believe this. And so, again, that requires us to think about how information is shared with these different types of populations.

Yeah, so a doctor might need a user manual to understand the limitations or the biases in the system, or how to interpret the data. Because there is still some interpretation: even though you get a number like 0.06, that means something, and somebody still has to make a decision based on it.

Yeah, and sometimes it's hard, right? Because, again, there are a lot of computational pieces to this that involve either black box models or just highly complicated statistical models, and we might be using them together. This is where I think service design comes into it: if we are building a service from end to end, you might be using more than one AI application in the delivery of that, and how those things work together is going to be important as well. So you might use one piece of AI/ML to do document or image scanning, or recognition of categories and tags, and you might use another one to create a chatbot. There are a lot of pieces that will be at play here, and you have to understand all of them together for that one journey or workflow.

One of the things I posted on LinkedIn, where I think these technologies can aid in the actual design process, was an example where I thought: well, we talk a lot about user personas, customer avatars, stuff like that. And I was
thinking: could we use text generation to actually write a rich story, to immerse ourselves in this person? I think we can, and this is definitely going to become an aid in the design process. But now, talking to you, I realize it's going to be very important to understand who is writing that story, and based on which information, because like you said, it might be very one-sided, the opposite of inclusive, the opposite of diverse. Those are things that hadn't crossed my mind yet. Any comments on that?

Yeah, completely. As you're talking, it makes me think: if we use AI and ML to build personas, number one, that's totally doable, not outside the realm of possibility right now. If we had a whole bunch of data about, let's say, the types of assets and artifacts that insights researchers collect to build personas, like your field notes and your recordings, and then a whole bunch of personas we could show a computer, saying "this is what we're trying to generate," it could totally do that. It's a hundred percent doable. Does it lose a little bit of the creative part? What I love about building personas is, I call it user fan fiction: I build these composites of people and try to bring them to life with all these little details drawn from a whole bunch of people. You can get some of that from a computer, but again, if the data that goes into it is biased, then the outcomes you get are going to be biased.

And here I'm going to contradict myself and agree with the other point you mentioned: as a researcher you're biased as well. So maybe your AI model would even do a better job of making sure that your personas are more inclusive and more diverse. Coming back to your question: is it actually ethical not to use AI in the design process?

Yeah, for personas, possibly. So the place where I think
it's important, so let's go back to this healthcare model, because I feel like the places where we see the most bias are in healthcare data, financial data, and any of these opportunity spaces, things like access to college and education, access to jobs. It's because the data we have access to is biased. For example, we built all these credit card models on data from the 70s and 80s, when women were either not in the workforce at the same rate or did not have access to credit at the same rate. That means we're not going to give credit to women at the same rate, because our models are biased. Our large language models are built off of Reddit data, which is a cesspool of information, so it's very easy to get them to become racist.

We have to think about that when we build. Let's say that, as service designers, we want to build a model that addresses our biases. We have to understand what those biases are, so that the data sets we provide to our model are more robust. And even in that process we can make things more biased. For example, say we have a data set of kids who are going to go to college, with only about 10 to 15 percent minority representation. You might think: how about we just double it? We'll just take those numbers and double them. Now we have more representation. However, we've now reified the problems within that data, because the information itself might have been problematic and unrepresentative. So you have to be really careful about how we build these data sets, and understand how, even in our processes of trying to do better, we can still add more bias to the table.

It feels like that ball of wool, or like spaghetti: it's like when you pull
at one end, something on the other end moves. I guess it's not linear, maybe that's what I'm trying to get at; you have to take everything into context at once. Maybe that's one of the reasons why designers are good at this: we like complexity, and we enjoy that challenge.

The other thing we didn't yet get to is: where are we? As in, why aren't we part of the conversation yet, or maybe not to the degree we could be? And to add to this question, what do we need to learn to give ourselves the right, what is the rite of passage, to enter these conversations and be more grounded?

I think it's: just jump in. One of the biggest challenges is that AI and ML feel like this mystery space that is so overwhelming, like it's not for us, not for service designers. But if all of these models are in application, if all of these models want to go live in the world, then the business units, business buyers and big executives who want to put this out there need to think about a service designer, because how this works is totally based on how it engages in the real world. So if we are basing our data on information that, let's say, you get from the doctor's office, and a service designer discovers that the doctor doesn't actually collect this information, maybe the front desk does, and the patient isn't involved, they're just doing it based on their own ideas, that helps us understand what this information actually says and means, and how it's used. And that's not part of the conversation yet. So service designers should be there. There's a need for it, because how information comes in, how it is then applied, and what it means on the back end, that whole end-to-end understanding of the application, is so valuable.

And you mentioned we
should just jump in. Easier said than done, maybe. If it's that easy, what's holding us back?

I don't think anything. I think what's holding us back right now is the challenge of being invited into the room. The idea of the human-centered, end-to-end experience is not really there yet, but it should be, and we have to get there. Because as these models get used in doctors' offices, as these models get used to identify people for loan repayments, as these models get used for marketing, even to create cultural content or to identify populations and do personalization and things like that, we service designers need to be in the room. And honestly, data scientists want this. We struggle with this, because the challenge is also: you have a great model, how do you show its value? You need to build dashboards, you need to build applications, and so service design needs to be there. I actually just replied to an email from a colleague, because they were asking where we have built dashboards, and I said: we need this conversation, because the challenge of showing the value of a lot of these new models is: how does it work? Who has access to it? Who sees it? How do they understand it? All of that is a service design question.

I can totally see this. Eventually this is just another technology, a pretty cool technology, but it's about the use, about unlocking the potential and making sure it's used in the right way, and it definitely seems that understanding user needs helps there. I'm curious, you said we should get invited into the conversation. When do you get invited in? What's the trigger when people start reaching out to you?

That's a really great question. The trigger I get is: "how do we get people to say yes to this data capture?", which is never the right answer; they're never the right
question. It's: "what do we have to do to get people to say yes, so we can collect information on them, so we can build the training data to make this application?" And my response is: I can't give you that information, because your use case is wrong. A lot of it is "help us make sense of this." Let's say legal and compliance really wants us to show x, y and z, but we can't, so how do I get brought into the room? It's: "okay, we want to bring this model to production, but we're getting all of these roadblocks in our way, mostly from different parts of the organization, so how do we get people to say yes?" Or the question is: "I have this model, it is complete trash, the accuracy rate is so bad. But we have now spent millions and millions of dollars on this program, because we were so excited about the opportunity. Someone gave us this wonderful presentation, and it was a little bit of snake oil, and now we have this four-million-dollar AI program and the outcomes are really, really bad. So how do we retrain, how do we retool, how do we fix it?" That's how I get brought into the room.

I'll give an example. There is a biometric identification program that looks at your face if you are a contract worker working on legal documents. Every time you jump into the program, it looks at your face to see if you have access, to make sure that no one else is looking at these incredibly secure documents. Well, this model is really bad on brown and black faces, and it's really bad with black hairstyles. Which means that if you have a whole bunch of contract workers who are brown and black, they're going to spend way more time just trying to get into your application than doing their job. That's a service design problem.

Now, it seems, and we've seen this more often in the service design space, that a good way to find the
opportunity is to look where things go wrong. It sounds like that's also a common scenario where you get invited in: something doesn't work, people struggle with something they should have fixed from the start. Is there a list of moments of entry, or challenges with AI, that service design professionals can latch on to? Like, if they signal "I see they're working on AI inside my organization, and I've not been invited to the first conversation where I should have been, now I just need to monitor when the data gathering starts and then go knocking on doors"? That would be a super helpful overview.

It's coming. This is the challenge, and I think you're totally right: it's coming, but where does it come from? When it comes to the responsible or ethical AI space, it's a little bit top-down, which means it has to work its way into different places. A lot of this is coming from the legal, enterprise parts of the business, because they want to be on top of compliance. But service design is how we get things done. It's how we make sure you have all of the access, all of the user buy-in, for a lot of these problems and challenges. So I think it's at the ground floor too, when things start breaking down and going wrong, and everywhere in between. There's value in, and a desire for, an end-to-end view. If I think about what I see, service design is a wonderful place to ask: okay, what is the minimum viable product here? What is the accuracy rate we need to reach to make sure we can do this? That's where I think this is necessary.

It definitely seems that you care about ethics and the biases in these models, but it seems like somebody inside the organization, the business stakeholders, and now it's legislators that are
starting to care. What have you seen to be good arguments to convince business stakeholders to get you into the room early on?

It's good business, that's the whole thing. If your algorithms are biased, you're leaving market share on the table. This is just good business. These programs cost so much money; artificial intelligence is not cheap. It requires a new type of thinking, a new way of looking at your numbers, and new types of technology and services that you start implementing. Your data pipelines, massive data management, your hyperscalers, they're all in the room for this conversation. So it costs money, and don't you want to do it right the first time? I sometimes feel a little bit like a mob boss here, shaking people down: "it would be a shame if your model had to get thrown in the trash." Which has happened. There have been court rulings that said your entire AI program must be scrapped because your data was problematic: you collected information on minors, you didn't realize you were doing it, and you shouldn't have. So there are all these ways in which, if we're playing fast and loose with this new technology, you might get a slap on the wrist, it might hurt your brand, and it might hurt you financially, because you're going to be taken to court. The business case for this is: do it right the first time, save yourself some money, get more buy-in, get people in the room, because the data exists for us to do things right.

It's almost like you're bringing in a new employee, in the form of a computer, into the organization. You want them to do the best job possible and not have to fire them after the first day. Service design can help with the onboarding and the training, and with making sure that you're actually getting good value out of this new
colleague. And I guess it's not more than that; it's something that helps achieve a certain goal.

Yeah, and I think the other thing people are struggling with, and where I think service design could be super helpful, is adoption. Everyone wants these programs to be adopted; they want people to use the technology they're building. But sometimes it's really hard to get people to change what they've been doing. Even their data science teams are not running in the cloud, they're running on-prem. So how do you get your data science teams to use the capabilities you have now spent a whole bunch of money on, because you have Azure or a Google Cloud service? There are all these new capabilities, so there's a great opportunity for service designers to lead and drive that change as well.

Now, if somebody got inspired by this conversation, I know I have, what would be a good starting point? If you're thinking "I love service design, and this also seems like a super exciting domain, I want to be at the forefront," where do you start? Which books do you read, which videos do you watch, which people do you follow? Any tips?

That's a really great question. There's a lot of movement right now. The Algorithmic Justice League is a great place to start. Weapons of Math Destruction is a book I recommend, and Anatomy of an AI System is another. To give some background on where I think service design and AI really join, and where I had one of those aha moments: Kate Crawford, who is an academic and a researcher, worked with an artist to create Anatomy of an AI System. If anyone wants to look it up, just Google, or use whatever search engine you prefer, "anatomy of an AI system," and you'll see this incredibly amazing document and image. It looks like an experience
blueprint to me. It is an experience blueprint to me, and it shows everything from how you mine the materials, the things you need to get out of the ground to make the technology, all the way to the end product of an Amazon Alexa. I looked at this and thought: this is an experience blueprint, just in a different form. It was so amazing to see, because it helped me understand all of those moving parts, and I thought: this is where a service designer could be so useful. How do you bring in all of these pieces that need to be brought to the table to understand how an AI application gets built and lives in the world?

And this touches one of the important parts we started with, the demystifying aspect. Just having a visual and understanding which components are there, what names they give these components, when they are important and why they are important, builds your vocabulary. I can totally imagine this image being a very good entry point, one that cracks that black-box concept of an AI: it's not "AI," it's all these other things. Is this image publicly available?

Yeah, if you just Google it you can find it, and it was in the Tate for a while, so it's a very famous image. It's art with a capital A at this point. But it looks like something a service designer could create, which is so exciting to me to even think about.

Heading towards the end of the conversation, I have a few questions left. You're in both spaces: the data science and engineering side, but you also have a good understanding of the design space. What are some of the biggest misconceptions you've seen people in the design space have about the data science, engineering, AI space?

I think it's just overwhelming for designers when they enter the data science space. And
I had the same feelings too. It feels like you're a little bit out of your league when you're around all these people who are talking in math in front of you, using all these terms you don't understand. When I first started my job, I always kept a notebook by my side; in the back of it is a running list of all the words I'm hearing for the first time. I write them all down, and about a year later I'm like: oh, I know all of those now. So I think it's a lot of that. You don't need to know the math; you just need to understand the data, how it is shaped, and then what it is in service to, what the outcome looks like. Someone else is there to help you with the math. And I ask all the time: "I'm going to ask a stupid question right now, what does that mean?" Because if we spend so much time letting the data science drive the conversation, we're not thinking about the application and the human use of something, and that's where the service designer is so crucial. So I think the biggest challenge is not letting the data science mystify you and make you feel like you can't be part of it. It's: well, what is this in service to? Let's say, and this is something I've seen a few times, it's a very popular use case, we look at images of eyeballs to identify where there is macular degeneration. We could use artificial intelligence; a lot of image scanning and image review is happening now, a lot of automation is going on in that space. I don't need to know the vectors or how that works. And service designers know this too, because we work on things like this all the time. If you're in the healthcare space, you're dealing
with new words all the time, to explain imaging and diagnosis and cutoff rates and prognosis. Those are all new things you're getting exposed to; it's the same sort of space.

What would you say to somebody who's listening right now and feels: "machine learning, AI, that's just for the big organizations, the Googles, the Facebooks, the Metas of this world, the Teslas. I'm just here, a team of one at a small company, how is this relevant to me?" Is it relevant to service designers in those areas as well?

Completely, and in different ways too. That's a really great question, because this is one of the biggest issues within artificial intelligence and machine learning: there's a concentration of power, because there's a concentration of data within big tech. Big tech has the strength to hold on to all of these new models and all of this new opportunity, because they are building with access to our data at scale. There are new organizations, community organizations and consortiums working to fight back against that and build different types of large language models that do not have those types of biases. But yes, this does exist for the small company. Absolutely you should be thinking about this, because you have data pipelines internally, and you are probably also buying third-party data for marketing purposes right now. If you are doing any of those things, you are already thinking about data science. And it is coming: if you are building chatbots for your customer service, if you are trying to automate your call center, those are all things that now involve artificial intelligence. You might not be building your own models, but you might be applying and implementing other people's models and services within your organization, and so you definitely need to think about that as a service designer.
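The face-verification failure Jillian describes, a model that works for most users but breaks down for specific groups, is exactly the kind of thing you can check without knowing the math: you only need the data, the outcomes, and who they belong to. As a minimal sketch, with entirely made-up group labels, numbers, and an assumed 0.90 accuracy floor (none of this is from the episode), disaggregating accuracy by group takes only a few lines:

```python
# Hypothetical audit: accuracy of a verification model per user group.
# All names and figures below are illustrative, not real data.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns {group: accuracy}, exposing failures that a single
    overall accuracy number would hide."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted == actual:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def flag_gaps(per_group, floor=0.90):
    """Groups whose accuracy falls below an agreed service floor."""
    return sorted(g for g, acc in per_group.items() if acc < floor)

# Illustrative records: the model verifies one group far more
# reliably than the other.
records = (
    [("group_a", True, True)] * 95
    + [("group_a", False, True)] * 5
    + [("group_b", True, True)] * 70
    + [("group_b", False, True)] * 30
)

per_group = accuracy_by_group(records)
print(per_group)             # {'group_a': 0.95, 'group_b': 0.7}
print(flag_gaps(per_group))  # ['group_b']
```

The point is not the math: the overall accuracy here is 0.825, which sounds respectable and completely hides that one group of users will spend far more time fighting the system than doing their job.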
Cool, I hope many people will. Jillian, if somebody remembers one thing from this conversation, what do you hope it is?

That AI is just complex computer math, and that there are still people in this process, they're just doing different things now. It is our job to make sure that everything is fair, just and transparent, and brings the best value not just to the business but to the community and to people.

Awesome, that's a great note to end our conversation on. Thanks for coming on. I really want to continue exploring this intersection between AI and service design, and thank you for doing the sequel to Carly's episode, this is a great addition. I hope we can make an entire playlist on this topic one day, so thank you so much for coming on.

Well, thank you, this has been so much fun. I'm here to evangelize about the value of ethical AI, so I'd love to bring more service designers into the fold.

Awesome that you made it all the way to the end of this conversation. If you enjoyed it, make sure to click that like button and leave a short comment about your biggest takeaway. Thanks so much for tuning in to the Service Design Show, and I look forward to seeing you in the next video.