In this video, you'll hear how service design can help to develop more human-centered artificial intelligence and how you can step in and contribute, even if AI is not on your horizon right now. Here's the guest for this episode. Let the show begin. Hi, I'm Carly and this is the Service Design Show, Episode 152. Hi, my name is Marc Fonteijn and welcome back to a brand new episode of the Service Design Show. On this show, we explore what's beneath the surface of service design, what are those small, hidden and invisible things that make the difference between success and failure, all to help you design great services that have a positive impact on people, business and our planet. Our guest in this episode is Carly Burton. Carly is the former head of design at JP Morgan. She has lectured on design and artificial intelligence at the University of Texas and now Carly is the head of product design at Facebook AI. The reason I'm excited to have Carly on the show today is that many service design professionals find themselves in heavily tech-led environments helping to design better digital products or, as you and I know them, services. Well, that's surely a great opportunity, but also quite a challenge. Now imagine contributing to a digital product that affects not millions, but potentially billions of users. Well, that's the work Carly is doing. Carly is working on a special kind of digital product, namely artificial intelligence and machine learning, domains that have traditionally been populated by engineers. Carly thinks that design can and must play a role in this space as well, and you'll learn why in this episode. Now, although working on AI might not be your daily business, I still think that this is a very important conversation for every service design professional to hear. Why? Because it helps us to be more proactive about shaping this new and transformative technology in a more human-centered way. We can't wait till we get invited into the conversation.
We have to step in and take our responsibility right now. So if you're curious to learn how service design can help to shape AI in a more human-centered way and what this means for you, well, make sure you stick around till the end of the conversation. By the way, you'll probably hear some AI-related terms that you might not have heard before. Well, don't let that discourage you. Rather, see it as a first introduction and an open invitation to learn more about this fascinating field. If you enjoy conversations like this that help you to grow as a service design professional, make sure to click that subscribe button and that bell icon, because we bring a new episode like this every week or so. Well, that about wraps it up for the introduction. And now it's time to sit back, relax, and enjoy the conversation with Carly Burton. Welcome to the show, Carly. Hi. Thanks so much. I'm super excited to chat with you today. So am I. So look at that. We're going to talk about a topic that I think we should be addressing way more often inside our service design community. I haven't seen a lot of expertise around this, so you're definitely bringing something new to the table, which is really exciting. But before we dive into that, Carly, could you give us a brief overview of what you do these days? Yeah, sure. And thanks again for having me. So right now I am a product leader at Meta. I lead a team of designers, researchers, and product managers on the development of our tools and services and frameworks for the models that we create here at Meta. All right. Sounds intriguing. And I definitely want to dive into that. But before we do that, we always do the rapid-fire question round. I've got five questions for you. The goal is to get to know you a little bit better. And the goal is also to answer these questions as quickly and briefly as possible. Are you ready? All right. Let's do this. All right, Carly, what was your first job?
Oh, I guess you could say it was mowing lawns in the neighborhood. Cool. What's always in your fridge? Oh gosh, I have a one-year-old, so definitely whole milk. If you could recommend one book, which one would it be? Oh gosh. The Power of Habit, I guess. Yeah. We'll add it to the show notes. The fourth question is, if you could be an animal, which animal would you like to be? Oh, I've actually taken one of those Enneagram tests, and I shared that I would be a wild horse. Wild horse, all right. And the final question on the list here is, do you recall your first encounter with service design? I do, actually. I was working at Frog Design. I think it was 2009, and I went to a small studio at a design place called Fjord to meet with some of the designers there, and we had a chat about service design. I felt like it was that time when everyone was talking about experience design and different ways of thinking about design from the traditional sense. So it's definitely an imprinted memory for me. It's remarkable that I've been asking this question for 152 episodes, and I think almost everybody has some recollection of that moment. Which is interesting. I just wanted to say, I suppose it's sort of that aha moment where these were things that we were solving for, but perhaps there hadn't been something to fully encapsulate it, and this gave that opportunity to start to have that discussion of what does it mean to orchestrate these different touchpoints. Yep, those aha moments. Yeah, cool. When I was reading through the notes you shared with me, there were some really inspiring things there, and one of the things you mentioned there was that designers have the opportunity to make the world better, and let me grab my notes here, make AI development more responsible. Like big, big topics. I'm really interested in how you see this.
So we're going to explore the topic of design and AI and responsibility, but let's go back a few steps and start with how did you get yourself tangled up inside this topic? Yeah, so I remember it probably started at a table in Midtown Manhattan. I was sitting with some engineers, and I think I probably understood not even a quarter of what the conversation was about, but we were talking about building data lakes, and one of the things that struck me was that, wow, a lot of designers aren't sitting around the table right now. As we constructed this data lake and developed these models, that then had an implication on what the experience would be, and once the model was baked and deployed into the solution, it was too late. And so this was about, I want to say, gosh, seven or eight years ago, and in that moment I said, I want to be a part of this. I think design should be a part of the development of AI. There are so many unintended consequences, and I just don't think that people mean to create harm. I just think that it's complicated, and with the inputs that you have, the lack of transparency, the explainability challenges, by the time you come to the output, the outcomes that are then created, I think we just need to be a lot more interdisciplinary in how we think about that. And so yeah, that's where the road began. I had some great sponsors along the way to help me, and started developing literature around it, and teaching in academia, and yeah, now I'm here at Meta, as you said, with the opportunity to help designers change the world. So this is going fast, and you mentioned things like data lakes, and I think it would be good to unpack some of these terms, but you sort of got yourself into this situation, maybe slightly by chance; you were at the table with some engineers who were already thinking and doing stuff related to AI.
I'm really curious, in that moment, and you mentioned already a few, but which challenges or opportunities did you see that made you think, well, somebody needs to be addressing them? Yeah, I think the challenge is always that designers don't feel like they have a seat at the table, but I think that that doesn't mean that you shouldn't ask, and I think it's about how you start the conversation. So that was always a challenge for me early on. But then opportunity-wise, I think that it's a growing moment for everyone. It's an opportunity for me myself to learn. I think it's an opportunity for the ML product engineers and data scientists to learn. Too often, people still think about design in its traditional sense. It depends on what their relationship with it was. I mean, I've had people who only thought about design through industrial design. Others just think about it as graphic design. Some people think about it as making a PowerPoint pretty. So you have to know your audience and where they're at. I think designers are really good at building empathy, and if you use those tools, and that's that opportunity, you can see where's this person at, how do I need to have this conversation? And then the challenge is looking within yourself and saying, yeah, this is an esoteric space. I have no idea, really, if I can add value, but I'm going to try to lean into this. Got it. And still, and this might be a rhetorical question, but I'm very interested in your perspective. What did you feel design could add to a world that was probably very tech-driven and engineer-driven and already focused on AI? Yeah, breakthroughs, really. I mean, we've seen different cycles of AI, and if you look all the way back to where they were at in the 1960s and where we are today, I think that there's huge opportunity in bringing in different disciplines and asking different kinds of questions and challenging the processes.
And that's also why, not as a traditional service designer myself, but I know for your audience, I think service designers have a really special craft in being able to look at the sequencing through service design thinking and bringing people together to co-create. So that to me is the opportunity, is the breakthrough. But also the topic that we talked about sharing today is responsibility. How do we just make AI better by design? When we think about the governance of it, accountability, transparency, privacy, fairness, all of those things. So I'm going to ask a lot of questions today, because I'm also exploring this topic and I'm trying to sort of unpack some of these terms that you mentioned. When you say make AI better, what are some of the aspects that we could be adding value to? Like, what is it that we can make better related to AI? Yeah, I mean, that's a great question. It depends on the type of model that you're creating. So let's say, I'll use the classic case that was talked about in a ProPublica article a while back. They had trained a model to help with sentencing of criminal cases. And one of the things that they found when they looked at the outcome was that a certain population was actually getting higher and harsher sentencing than another profile. And one of the things that you can trace that back to is, was the data inherently biased? And then not only maybe was the data biased, but how were the thresholds set? And so I think that when we think about making models more fair, that's, I think, a perfect example of, here are the outcomes. And the one thing, you know, when I read that article, and I want to say, gosh, it's probably at least five or six years old now at this point, so it's an old example, but I just think that it's a salient one, is that we can create models and they can perform to a certain extent.
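The kind of outcome audit Carly is describing, tracing harsher recommendations back to the data and the thresholds, can be sketched in a few lines. The group labels, risk scores, and decision threshold below are all hypothetical, purely to illustrate checking whether a model flags one group more often than another:

```python
# A minimal, hypothetical sketch of a disparate-outcome audit:
# compare how often a model's "high risk" recommendation falls on
# each group. Groups, scores, and threshold are all made up.

def audit_outcomes(records, threshold=0.7):
    """Return the rate of 'high risk' flags per group."""
    rates = {}
    for group in sorted({r["group"] for r in records}):
        scores = [r["score"] for r in records if r["group"] == group]
        flagged = sum(1 for s in scores if s >= threshold)
        rates[group] = flagged / len(scores)
    return rates

records = [
    {"group": "A", "score": 0.8}, {"group": "A", "score": 0.6},
    {"group": "B", "score": 0.9}, {"group": "B", "score": 0.75},
]
print(audit_outcomes(records))  # {'A': 0.5, 'B': 1.0}
```

A gap like the one printed here is exactly the signal that would prompt the questions Carly raises: was the data biased, and how were the thresholds set?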
But then there's also designing for the moment of what you do with the output. And so I think that often, whenever you have that, let's say in this case, the recommendation on what that sentencing should be, does the person who's receiving that information know what is encapsulated in that recommendation? And how do we think more about what we do with the model outputs? So when I think about design, it's not only about encouraging and thinking about the processes in terms of how models are developed so that, in that case, the outcome is better, but also, how do we design for the moments of interaction and understanding? So you see that the design community is able to bring value to the AI field from that perspective, that there is currently a skill gap or a knowledge gap or a mindset gap that design can fill. I think design has a really critical role in how not only AI is developed, but also how it integrates into experiences. And I think that's where service design and some of the methodologies and, you know, the thinking and doing aspects can be really powerful, because what you're looking at is sort of an end-to-end experience. And you're considering all of the factors. And often I think when people are developing a model, maybe they're solving a very discrete problem, but they're solving it in that instance and not thinking about the picture holistically. And that's where some of the unintended consequences can come out. Yeah. And I'm noticing that I'm already struggling to formulate my questions properly, so I'm prototyping as we go along. And these unintended consequences, that's super interesting. And also when you mentioned that it's maybe developed in isolation, or at least it's not taking into account the broader context.
And that's maybe what service design does bring: the broader context, the broader perspective, and being more mindful about the consequences that might happen or the things that you need to keep in mind. One final thing I want to ask you about this is, what happens potentially if we don't get involved, if we say, okay, this is not our task, let the engineers focus on it, let it be their domain? What do you see happening? Yeah. Oh gosh, I don't think I've thought so much about that, because for the last seven or eight years, it was just sort of like, we must, we must, we must. And I think that perhaps where that motivation came in was that I just think ideas and work are better whenever you have inclusive problem solving, with that inclusivity coming from a range of disciplines. I think we all have something different to offer. It's one of the reasons why, you know, I love living in New York. We have people from all over the world, and with the collision of different ideas, different perspectives, different ways of seeing the world and problems, I think you ultimately come to better outcomes. And so I suppose maybe the risks are we don't materialize the full benefits of AI. We risk not building responsible solutions, you know. I think we've seen cases where technology for technology's sake can go really wrong. I think that things really need to be in service of people. And one of the studies that I did many years back was understanding what the future generation expects out of technology, and they expect a lot, and it's interwoven into their daily lives. And if that's the case, what does that mean for the social dynamics and for the fabric, you know, of our communities? And that's why I feel, you know, in design, traditionally we always talked about, you know, is this viable? Is it valuable? Is it feasible? I think that we should also be thinking about, is it responsible? Is it sustainable?
And designers, I think, uniquely with our training have that opportunity. So you already mentioned a case of where AI has been used and where there were unintended consequences. And I'm sure that by the day we get more and better examples. And you've been at this for seven, eight years. But I'm curious, for somebody who is listening right now and has never been exposed to this material, could we go through a scenario of how a service designer might get involved in a project and a challenge like this? Like, could you take us through what that would look like? Yeah. That's a great question. I think that the first advice I always give people is to stop reading articles and pick up books, or take some really difficult courses, to just familiarize yourself with the language. It gives you something to feel anchored to any time you approach a conversation. Early on, I sat with a friendly data scientist and I asked him to teach me the basics of statistics again. It had been over 20 years since I had thought about those things. But I needed to understand what they were going through as they were trying to fit the model. And I was doing research at the time, and the more I understood the technical aspect, the better my inquiries could be, and then the more trust we built together. So I think the first strategy I would take is to educate yourself, as well as find somebody who's willing to give you the time to just take you through some basics. The second thing, I think, is to create the opportunity to drive influence by starting with wanting to gain an understanding. An example case that I could share, sort of hypothetical, might be, because a lot of different people are trying to figure out, you know, how do I use AutoML? Back to your point around maybe needing to describe definitions. What is AutoML? Exactly. Yeah.
So if you're unfamiliar, automated machine learning is the process of automating potentially every step of the ML process, from selecting raw data sets at the beginning, to the authoring, to the training, all the way to model deployment. So if you're also unfamiliar with the ML development process, maybe that's a good place to get anchored: okay, what happens at the beginning? Maybe somebody develops a hypothesis, they select their data, and so on. So that might be an important aspect too. So, okay, in this example, let's say, you know, you join the team, like, yeah, we're going to do this, and you say, well, my boss told me that I need to be on this project, and they look at you like, what are you doing here? Why do you need to be here? I think, you know, in my early days, I would have been like, well, you know, what's your hypothesis? What are the user needs for this? What implications are there going to be? And, you know, coming with all of these questions. And I think that over time, what I found is it's really about taking a step back and asking them questions that help you really understand what the motion is, and then what inputs you can give in that motion, and guiding them to what tools and services you have, as opposed to coming and saying to them, well, I have this service blueprint, and if we define all of these things, then we can figure out what it is you need to build. Instead, I think it's more like, well, based on this value you're trying to deliver, instead of fully automating this, have you thought about a human-in-the-loop moment? Might this be a better use case for augmenting an ML developer's experience? And basically just having that conversation and then teasing out ideas. And then they can say, that's interesting. And then you can say, oh, well, I have a tool that we can use to maybe capture that and work through that together.
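To make those stages concrete, here is a toy sketch of that development flow with a deliberate human-in-the-loop checkpoint before deployment. Everything here, the function names, the stand-in "model," the review step, is illustrative, not a real AutoML API:

```python
# A hypothetical sketch of the ML development steps described above,
# with a human-in-the-loop moment before deployment. All names are
# made up; the "model" is a stand-in function, not a trained model.

def select_data():
    # 1. Select raw data (here: toy (x, y) pairs where y = x squared)
    return [(0, 0), (1, 1), (2, 4), (3, 9)]

def train(data):
    # 2. Author / train the model (stand-in: return the true function)
    return lambda x: x * x

def evaluate(model, data):
    # 3. Evaluate the model against the data
    return all(model(x) == y for x, y in data)

def human_review(model, metrics_ok):
    # 4. The human-in-the-loop moment: a person inspects the model
    #    and its evaluation before it is allowed to ship.
    return metrics_ok  # in practice: a real reviewer's decision

def pipeline():
    data = select_data()
    model = train(data)
    ok = evaluate(model, data)
    if human_review(model, ok):
        return "deployed"          # 5. Deploy
    return "sent back for rework"

print(pipeline())  # prints "deployed"
```

The design question Carly raises is exactly where step 4 belongs, and whether the goal is full automation or augmenting the developer who sits at that checkpoint.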
Why don't we co-create what that might look like? I'll pause there if this is helpful. Yeah, yeah. Thank you for the pause, because I definitely want to touch upon that. So one, I really like that you mentioned, get yourself familiar with at least the basic language, the basic vocabulary, so that you can ask better questions. And the other thing I like, and that is my interpretation of what you just mentioned, is, at the start, be the user advocate and start asking questions around that. Like, from a curious perspective, and trying to always bring up the fact that somebody will be using this. Somebody will be... At some point, there's a human in the loop. At some point, there's a human. Always, yeah. And that could be your role at the start of the project. Always, yeah, being an advocate for that. What happens next? Yeah, so I think then at that point, the beauty of a lot of these tools is that often you create them together. So if you're trying to create the sequencing of these steps to identify where that human is at that point, it's an education moment for you and then for them as well. And so as opposed to maybe working in the dark in a silo and trying to create this beautiful happy path of what this should be and then bringing it back, I think it's about always sitting together, working through it, and being able to understand what the trade-offs are to get to that, you know, some people say minimum delightful product, minimum viable product, minimum valuable product, whatever is the definition that fits your culture. But I think that it's about working through that together. And then in some cases, the outcome may be that your artifact was just identifying influence points, micro-moments along that journey. Perhaps there was no interface. Perhaps it was more of a requirements definition that came out of it.
And so I think being flexible on what your output is and what that artifact is, and knowing where influence and impact come from, that's maybe the third piece that I would think about in these types of examples. Now, when you mention this, I'm really curious, what have you seen are the triggers that sort of pull people into your expertise? As in, it might be a blind spot for them that they need to involve the user more, that they need to think about the user more. And when you come in, like you said, like, what are you doing here in this room? What have you found to be good triggers to get people excited about what you're bringing to the table? Yeah, you know, I think that it's certainly nuanced to what problem you're solving. And perhaps maybe that's the other meta point to make, or macro point, is that you always need to be able to understand what the larger business priority is, or what larger impact you are trying to make, and how your effort can be in service of that. So if, for example, it's about adoption metrics, or if it's about, you know, behavior change or sentiment, I think really understanding the intended value, or the risk in not getting it right, can give you the weight in the room. And if you're able to pull that thread, I think it creates the space to have the conversation a little bit more. And if we try to tie it back to what we talked about in the beginning about responsibility, is responsibility also one of those drivers? I certainly think so. I mean, when I think about design principles around AI, I think responsibility is a critical one. I mean, I think ML excellence is not only about the speed in creating the models, but the quality of the models, and having responsibility dimensions to that is critical.
Other things that we think about are around transparency: how can we get visibility into debugging efforts, or understand whether your model is overfit? So that's another critical principle. And perhaps even a third one, I would say, is effortless. And effortless not in the sense that this can all kind of be automated and I'm not putting in effort, but freeing up the engineer from the lower-order tasks to think about the higher-order problems when they're crafting these models. So how do we just make that experience better by design for them, so that the quality and the ease of creation are there, and therefore the models are more performant? I'm writing this down as you share it, and I want to ask a question, a sort of reverse question, around these three things, just to quickly get your personal take on it. It's often really hard to define something like responsibility, at least for me, I think. But if we flip it around and I ask you, what is non-responsible design? Yeah, so when I personally think about responsibility, I think about it not only in terms of the process that you took to craft the model, putting in the right type of diligence and having deliberate decision points along the way: have you really evaluated your data set so that it's comprehensive, complete, and consistent? I also think about it in the sense of the performance. Let's say that someone's going to be making a decision off of this. Have I been able to explain my model output so that they understand what trade-offs they have? And as an example, whenever I taught students about this, I gave the example of the train here in New York. It'll say the train's coming in two minutes, and you think, oh, okay, two minutes, I don't need to run down the stairs. But then you get to the platform and the train's already gone. And you're like, wait a minute, the prediction was two minutes.
There was a probability in that prediction, but they're not explaining that. They're just saying two minutes. So that's a simple example. When you think of maybe greater implications: if I just miss the metro, it's okay, I'll grab the next one. But if I'm trying to make a really important decision, I want to understand what the implication of that trade-off is. Right? And then I think a third one, personally, when I think about responsibility, is: just by creating this, what change am I making? And that's really the full circle, and something that I think we should always think about when we introduce new products and new services, especially with some of these really powerful models that are augmenting decision-making, is that it's going to change the way that we go about doing things. And that is why perhaps I said we shouldn't only be thinking about viable, feasible, valuable; we should be thinking about responsible. Okay. So one of the things that I'm still curious about, and you briefly mentioned this before: people who are listening to this are most likely from the service design community. They are interested in service design. Again, they are maybe not intrinsically interested or motivated to think about AI. But if they get the opportunity to get involved, what is the thing that they shouldn't do? Like, what are some of the pitfalls that they should try to avoid if they get the opportunity to work on challenges like this and they basically don't want to screw it up? Yeah. Thanks for asking that, because I wish that I would have gotten that message many years back. I would say, if I were to give a top three, I think the number one is: don't talk at people. I think often people say, this is the way, and I have this language of the process that it should be.
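Carly's metro example can be made concrete: a bare point estimate ("2 minutes") hides the spread, while a range exposes the trade-off the rider is accepting. A minimal sketch, using made-up historical gaps between trains:

```python
# A hypothetical sketch of surfacing uncertainty instead of a bare
# point estimate: "2 minutes" hides the spread; "2 min (likely 1-3)"
# exposes it. The historical arrival gaps below are invented.

import statistics

def describe_arrival(samples_min):
    """Summarize historical arrival gaps as an estimate plus a range."""
    mean = statistics.mean(samples_min)
    sd = statistics.pstdev(samples_min)
    # A rough two-standard-deviation band, clipped at zero
    low, high = max(0, mean - 2 * sd), mean + 2 * sd
    return f"about {mean:.0f} min (likely {low:.0f}-{high:.0f} min)"

history = [1.5, 2.0, 2.5, 2.0, 1.0, 3.0]  # minutes between trains
print(describe_arrival(history))  # about 2 min (likely 1-3 min)
```

The point is not this particular formula but the design decision: the output communicates that the prediction carries uncertainty, so the person receiving it understands the trade-off they are making.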
And I think that often, because it's a space that hasn't been incredibly inclusive for design, if you just come at people as opposed to wanting to gain an understanding... I think that that is a critical starting point, because if you start with wanting to learn, with more curiosity, I think that you'll realize that everyone's kind of having their own challenges through this. It's far more complicated than people understand it to be. And they certainly would love help in figuring out a way to move forward. So seek to understand before speaking at people. The second point I would say is: be incredibly flexible. As designers, we have a lot of different tools, and there's not one way to solve a problem. Often I have found that when people are too precious with their process, they have a hard time breaking through, and I think that you have to look at what it means to potentially be more agile with your insights. How do I work in a hypothesis space, as opposed to maybe really working bottoms-up? And those were some of my early challenges: I felt that this was the right process and I did not want a hypothesis. I wanted to start, you know, with a completely open mind and build up perspective. So that would be the second one. And then I think the third one is compassion. Compassion for yourself, compassion for others. Because it's not easy, you know. I mean, if it was easy, the models would all be great, and we wouldn't have all of the things that you read about in the news and all of the different disappointments. And I think that, you know, especially when you're trying to grow your own influence and impact, it's easy to feel insecure or question your value. So, you know, have compassion for yourself. But then for others, you know, try to really empathize and learn what they're going through.
And I think if you lead through that, then it creates an environment where people feel like it's okay to fail and try, because I really do think people are often in service of trying to create the best that they can with what they have. And yeah, it's just hard stuff. Yeah, that's a really good point. Like, just realizing that everybody is trying to figure this out and doing their best. And it might be intimidating at first, if you don't speak the language or don't understand what people are talking about, but also recognizing that they're also still figuring this stuff out. Like you said, it's complex, it's new, we haven't cracked the code yet. I'm sure you also have a perspective on this: it's great that design is sort of infusing this space, that it's getting in there. But what are some of the things we could do to accelerate the adoption of the design practice within this space? Yeah, I think one of the ways to accelerate is talking more about it in our community, you know, bringing more people to the space. That's one way, certainly. I think the second way, and one of the things, you know, that I've invested in, is going into academia where designers are learning their craft and introducing to them really early on the influence that they can have, and encouraging, you know, cross-disciplinary studies. I see a lot of designers now graduating not only from sort of human factors programs, but blended with computer science. So I think, you know, there's the building of momentum through sharing the word, but then also through growing the talent. And then I think that, you know, if you're working in a company that's trying to figure out how to use AI or ML, or even working on a product, you know, identify what the problems are. Like, how can you get involved and look for these case studies? And I think that it has to start from the design community.
I don't think that others are going to stop and say, oh, we should really, you know, have a designer try to solve this with us. But in that effort, you have to recognize where to put your energy. Because in some cases, it's going to just be hard to get people there. So figuring out where your own ROI can be, I think, is really important. And something that I learned over time is: do they have the right investments in place? Like, are they willing to, you know, open this seat at the table once the door is open? That really drives influence. Yeah, I'm curious about that. Like, how do you recognize these situations? Because maybe if you're just starting out, every opportunity is something that you want to grab and learn from. But as time goes on, you see that some opportunities are better than others. If you could give some of your reflection on how you spot the opportunities that are right for you to invest your time and energy in. Yeah, that's a great question. I think that leadership is always really important. You know, I joke with my friends: as a mother of both a four-year-old and a one-year-old, I had done some early volunteer work in public schools here in the city, and I saw how much a principal made a difference for the teachers, and then the teachers could make a difference. And I use that as kind of an example: I think it's the same thing with companies. I think the leadership really makes a difference. They need to be invested. They need to have a clear mission and vision, and that needs to be shared, and it needs to empower the people to get behind it. So that's one critical piece: really evaluating the leadership. I think the second piece is the strategies. You know, you're going to have your product strategy. You're going to have technical strategy. You're going to have operational strategy. And taking a look at that, because that's sort of the translation of the mission and vision.
Like, put those two together: it's the translation, it's how you actually ultimately achieve that. And how solid is it? And then the third piece is, like, okay, well, you know, with that, do you have the right resources to do the work? Because you're talking about the need for, you know, compute capacity. You're talking about hardware. You're talking about talent. You know, there are people who are going to, like, talk the talk, and then people who are walking the walk. And so you want to kind of look at those three buckets as you're evaluating possible opportunities for yourself, I suppose. Makes a lot of sense. So I would like to try to do an exercise with you where we sort of look into the past and into the future. If we look back three years and try to describe the industry back then, today, and where you hope it will be in three years, like, which narrative comes up? Yeah. So I would say three years ago feels like even longer because of the pandemic, so I have to stretch to see that far back on the horizon. I think that there was great momentum. I mean, if I look at some of the programs that have been started in the last five years and the momentum around them, I think you're seeing this build towards, yes, like, you know, designers being part of the problem solving in AI. There's just been a lot of movement. And so I would say where we're at today is that people are starting to define some playbooks, and they're looking at what responsible development looks like, or how you rationalize when or when not to use AI or ML in a product or process. And so you're starting to see some of these early tools in action.
So then, if I were to cast forward, I would hope that we're starting to solidify on some really clear ways of working, seeing the proof points behind them and good case studies, and that there's, like, a whole new cohort of leaders in this space, and that, you know, we're doing less educating and more influencing, and people are really enabled. And yeah, we're starting to see great, great progress. What I find interesting and sort of hopeful about the story that you describe is that if you are interested in this, there's so much you can still do to influence and help shape this community, this cohort, this field. Like, if you're up for it, it's a great opportunity. Maybe there are a few playbooks, but there are probably a lot that still can be made, and that might be your sort of mission, right? I completely agree. And when we start to look at the new surfaces that are being, you know, designed for, you know, in VR spaces and AR spaces, our mediums are even changing. And, you know, AI and ML are going to be present in so many different dimensions of our experiences. So I hope that, you know, your listeners get excited and say, like, all right, it's never too late to learn a new trade and tackle new problems. I'm always learning. I feel that, and that's perhaps why I said, you know, practice self-compassion in this, because it's really easy to get overwhelmed. And I look back at what I thought I knew five years ago, and I'm like, wow, okay, there's so much more we can know. It's a lifelong learning journey in this space, for sure. And I think as designers, with our craft, that's ultimately what we're also always trying to do. Yeah, absolutely. If somebody made it to the end of this episode, because we're almost there, what is the one thing you hope that they will take away from this?
If there's one thing that you take away from this, I hope it's that you found a way to use your service design thinking to influence the development of either, you know, models or products in a more responsible way. I think that there is that opportunity. You are the expert at your craft. And I hope that you take away that there's a real opportunity to drive that influence through your service design thinking. And yeah, I'm excited, I guess, to track and see what momentum maybe comes from here. I really hope that we can get more practitioners and people who are already in this space to sort of serve, I don't know if role model is the right word, but at least as a light, a guiding light, to show what's possible and to inspire people to travel even further. So I'm already thankful that you came on the show and shared what you could share. I know there are a lot of things that we could potentially talk about, but maybe not in public. But still, nevertheless, thank you, Carly, for coming on and sharing this with us. Yeah, thank you so much for having me. And, you know, I hope the community will, like, open up, you know, share the challenges that you've been going through. I think it takes a village to enact change. And I think that by reaching out and using places like the show to, like, learn and then engage, it's an incredible opportunity. So, Marc, thank you so much for having me. I really enjoyed our conversation. And I look forward to continuing to follow the show and hearing from more service designers. Awesome that you made it all the way here. I really hope that you enjoyed the conversation with Carly and learned something new. I'm really curious: how do you feel service design can contribute to the development of AI? Make sure to leave a comment down below, and let's continue the conversation over there. Thanks so much for watching, and I'll see you in the next video.