Welcome to the 3A Institute. My name is Rob Hanson. I'm a senior honorary fellow here, and today I am very honored to have the delightful Lama Nachman, who's a Fellow at Intel and also the director for the Anticipatory Computers Center. Did I get that right? Computing lab. Computing lab, sorry. And we're going to have a conversation today about all the fantastic work that she's doing, particularly around sensing and sense making. So we might start there, actually. Do you want to tell us a little bit about the lab and sense making and what that's all about? Sure. So the lab is actually a multidisciplinary lab. We have people in user research, ethnography, design, machine learning, embedded systems, software, hardware, and really what we're trying to do is make technology more aware of people and their situations, anticipate what it is that they're trying to do, and then assist them in different facets of their lives. So we typically look at different types of applications where we can apply that intelligence, and we try to build these experiences in a way that, at the end of the day, is really helping people accomplish what it is that they're trying to do in a more delightful way from a human-machine interaction perspective. And of course, you're famous for helping Stephen Hawking. Can you tell me a little bit about that? Sure. So actually, that's really an interesting application of these technologies. If you think about people with disability, every single interaction is quite costly from a time perspective. It takes Stephen quite a bit of effort to type one letter. One of the things that we take for granted when we think about computer interfaces is that we assume that people can easily type or move a mouse or do any of these things. And when you have very limited ability to actually do these things, you have to make the system much more intelligent in anticipating what it is that you want.
So today, we all think of, for example, word prediction. Word prediction is a very obvious example: you don't have to actually type all the different letters; the system predicts what it is that you want to do. Now think about anything that you might want to do with your machine. Being able to anticipate, based on your context of use, what it is that you might want, and give you a small set of options rather than have you go do it, makes it much, much more efficient to actually use a system like that. So with Stephen, for example, anything that he's doing, not just typing letters, but opening files, surfing the web, doing all of these things. What we ended up doing was watching him use his machine for a while, so we understood what the things are that he's trying to do. Then we would understand, from a lot of the clues of what he's doing on his machine, what he might want to do, and we would surface a small set of options for him that he could select from, rather than have him scan the whole screen trying to figure out what point in this 2D space he's trying to actually reach, which is kind of ridiculous if you think about it. The other thing that we also did: Stephen used a sensor, an infrared sensor that sat on his glasses, to detect essentially his cheek movements. So imagine a system that's constantly actuating things, and when the thing that he's interested in is actuated, he would move his cheek. That sensor would detect that the cheek was moved and would trigger whatever is on the screen. The problem with this is that Stephen had very different motions depending on how tired he was, whether it was morning or evening, and so on.
So one of the things that we did is we actually created an adaptive sensor that understood his range of movement and was then able to filter a lot of the noise, in a way that means you don't need people to constantly be adjusting what the threshold of that sensor is, for example. I'm interested in the enabling technologies behind that. My understanding, from some of the things that you've been saying and from some of your work, is that it's only because of recent advancements in technology that we've been able to have technology that is, first of all, that sensitive, but also able to have the smarts to have that adaptability. Could you talk a little bit about that? Yeah, sure. So if you think about it, to create systems that are contextually aware, what do you need? You need sensing. We've seen a huge advancement in terms of the fidelity of sensors, the size of sensors, the power consumption of sensors. Sensing has improved dramatically over the last 10 years, so now you see it embedded in everything, and it's not costly in terms of cost or power. The second thing that needed to happen was to have enough computation so that you're able to actually model the sense making part of it. A lot of these techniques use machine learning to ingest all of the sensor data and then create models that can actually comprehend, out of that sensor data, what it is that somebody's doing. So that advancement in computation allowed that to happen, again at very low power, because a lot of those things are embedded in our everyday life; you're not assuming supercomputers sitting somewhere doing that work. And then the last part of it is connectivity.
Because a lot of times what you're really trying to do is bring data from other places, and you're trying to actuate things that are in the digital world. Being able to have that connection means that you really need good communication. So I would say that the advancement in all three is what's really enabling us to see tons of usage of these technologies today. You seem very motivated around disability. What sort of things drive you? What interests you about the ability to use this technology in these new ways? So one of the things that makes this a really special time is that, if you look at what we've instrumented to be able to control the physical world, we as a society have done a lot of that, frankly, not because of disability, but because of convenience. The idea that there are home controls for everything, that you're able to control anything in your home from your smartphone, is something that people have done because of convenience, and that works because of the economy of scale. But if you think about it, given the fact that now all you need is an interface to a computer to control your physical world, that has a huge potential for people with disability. Because now all you need to do is solve that first part of the problem, which is: how do you enable someone with disability to actually control a machine? The minute you do that, you've enabled tons of independence. They can control anything in the physical world through that one interface. So to me, that was a big part of it: how do you enable people to continue to be independent, even with all of these disabilities and hindrances?
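For the technically minded reader, the adaptive cheek-sensor thresholding Lama described earlier, where the trigger follows the user's changing range of motion instead of relying on a hand-tuned fixed threshold, might be sketched roughly like this. This is an illustrative toy, not Intel's actual implementation; the class name and all constants are invented.

```python
# Toy sketch of an adaptive trigger threshold; illustrative only, not
# Intel's actual implementation.  All names and constants are invented.
class AdaptiveTrigger:
    """Fire on cheek-movement-like spikes in a raw sensor signal, while
    tracking slow drift in the user's baseline and range of motion
    (e.g. weaker movements as the user tires through the day)."""

    def __init__(self, alpha=0.05, k=3.0):
        self.alpha = alpha    # smoothing factor for the running stats
        self.k = k            # fire when deviation exceeds k * spread
        self.baseline = None  # running estimate of the resting signal
        self.spread = 1.0     # running estimate of typical deviation

    def update(self, sample):
        """Feed one raw sample; return True if a movement is detected."""
        if self.baseline is None:
            self.baseline = sample
            return False
        deviation = abs(sample - self.baseline)
        fired = deviation > self.k * self.spread
        # Adapt after deciding, so the trigger follows slow drift
        # instead of requiring someone to re-tune a fixed threshold.
        self.baseline += self.alpha * (sample - self.baseline)
        self.spread += self.alpha * (deviation - self.spread)
        return fired
```

Because the threshold is a multiple of the recently observed spread rather than a fixed number, the same detector keeps working as the user's movements become weaker or stronger.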
And it's actually kind of interesting, because to me the really amazing thing about these technologies, when we think about contextual awareness, is if you think about them as a spectrum. I would argue that each one of us has certain contexts where we're disabled. When you're driving a car, you can't be distracted, you can't be looking at your phone. I do this all the time, but I shouldn't be. So depending on what part of our lives we're engaged in, there is a notion of disability in there. And for technology to be able to actually comprehend that and tailor itself in a way that is the most suitable for the context, that enables people across a whole range of abilities to interact with that world, without having to create something specifically for the disabled. That's very powerful. What excites you about the future of technology and where that might go, particularly in that regard? What do you think it might empower and enable us to do? In general, if you think about where technology can go, there is really no limit to it understanding everything that surrounds it, and then really facilitating everything for us that we need. If you think about it, today we make a lot of assumptions about how we need to interact with technology. In some sense, I keep joking that the technology really trains us to interact with it in a way that it can understand us. And we've seen a lot of improvement in these technologies. I think the best example is actually speech recognition. Ten years ago, speech recognition was useless; unless it was a perfect environment, it was really not something that was that usable.
And even in the way that people designed systems that had speech recognition, there was always this option to back out of the speech recognition to use some keys or whatever, because you knew that these systems failed drastically. They had a huge potential because, as I mentioned, you're driving a car, you're doing all of these different things where you can't actually be typing anything; that's a great interface. But the way we spoke, we spoke so that we could be understood by these machines. And the way you actually gave commands, which is not just about speech recognition but even NLP and command recognition, we had to essentially almost act like robots for these systems to understand us. Now, if you start to think about where technology is headed around autonomy and autonomous vehicles, or whatever example shows you where the world is headed, with robots that are acting on their own and working with humans, it's impossible for that to scale if you don't solve that human-machine interaction problem. It's impossible. So to me, if you think about human-to-human interaction, it's so contextual in nature. You don't repeat everything that you assume the other person knows. So in my mind, where technology really needs to get to is the point where it understands what it is that you know, it understands what it is that you want, and you can understand what it is that it knows. There's a transparency part of that problem. And then it can really interact in the most efficient and pleasant way that one can have, just like you would be interacting with a human. That's really fascinating. I'd like to use this as a moment to transition to something else.
And it comes to the point that a lot of people are talking about what we call AI, or artificial intelligence, at the moment, and really it's a cluster or constellation of technologies; as you mentioned, there's natural language processing and machine learning and all these other things. And often we hear people say: I need more data, I need better data, I need higher quality data. And in fact, they often already have access to that data. You've done some really fascinating work in this space pointing out that actually, there's a lot of stuff we're currently collecting that we don't know we're collecting. Could you let us know a bit more about that? Yeah. So I think, yes, there is a problem of needing a lot more data. But that problem really comes from the fact that we're using a lot of techniques that are very data hungry. If you think about a lot of the advancement that happened in perception recently, whether it's being able to detect objects or people or whatever, there was a huge advancement because of applying deep learning to that problem, which is awesome. But deep learning requires tons of data to train. Now, to me, what's interesting about this problem is that there are a lot of things enabling us to get a lot of data, but it's so much easier to get data from the digital world. Imagine trying to understand something like what's trending in the world. Google doesn't have to work very hard to understand what is trending; everybody's using a search engine to look for something, or Facebook, or whatever. Or the whole advancement in the identification of faces and objects and things like that: what was Facebook doing? They had people tagging tons of pictures with all the things that they want. They solved the ground truth problem by having people do that for them.
And now you have massive data that's tagged, so you can easily train these systems. So in some cases, yes, I think we have tons of data to enable us to build much better models and things like that. The problem is when you go into the really messy physical world. When you start to talk about something like autonomous vehicles, for example, you have to get data from the physical world. And yes, so many companies are actually trying to collect data from the physical world to train these systems. But if you think about it, to capture the cases where these things will break down, you have to have an unbelievable amount of data to find that long tail of all of these different problems that, in general, you're only going to encounter after millions and millions of hours of driving. So there is a problem with data, but there is also a quality issue. There is also a ground truth and annotation issue, because it's not sufficient to just have data; you have to have some cleaning of that data, some ground truth, and so on. To me, one of the things that I believe is really important when you start to build these technologies is to actually get data from the physical world that is realistic. Because with a lot of the contextual awareness capabilities, one of the problems that we always run into is that you do these things in the lab, you take them out into the real world, and everything breaks down. So it's really important, when we're doing the data collection, to collect it in a place that is realistic for the thing that you're actually trying to use it for. And that requires a lot of effort. That's not something that's usually available when you're collecting a lot of data from the physical world, but it's not something that we can do without.
Sometimes the sensors out there are actually capturing far more than we planned for them to capture, though, aren't they? And so we can actually do things with that. There's some interesting work you've done in that space, and I think there's a lot of potential there. Could you tell us a bit about that? Sure. So when we think of sensors, we usually think of things that were built for the purpose of sensing the thing that we're trying to sense. A great example of that is an accelerometer: it's actually measuring vibration, and that's what you want to sense; you want to translate that into an orientation, or a human motion, or whatever that is. So we understand that as a sensor. However, if you think about it, there are so many things that we end up impacting in the real world as a result of our activities, motions, and so on, that can then be used as sensors. There are multiple examples of that; I'll use wireless as an example. We usually think of wireless as a communication mechanism: your Wi-Fi is used to actually communicate data. However, Wi-Fi signals get impacted by people, because of the fact that we end up reflecting these signals: the multipath effect, all of these things. So if you start to think about that and say, well, can I now use the information that's coming back from the wireless, in terms of signal strength and things like that, to try to model human motion, or to understand how many people there are in a space? Well, that's possible. You just need to actually be looking for that noise in the signal, rather than doing what these systems typically do, which is try to suppress that noise, because if you're trying to communicate, you don't want your communication to be impacted by how many people are in the room.
But if you're actually trying to sense how many people are in the room, you want to amplify that part of the signal. And we see that in many different things. We also use things like RFID, where people think of RFID as a mechanism to identify an object, but when a person touches an RFID tag, it totally changes the behavior of that signal. So again, you can actually build a detector for human motion using RFID. And the nice thing about this is that RFID can be totally passive, so you don't even need to power these things. Imagine a mug that you just put a passive RFID tag on; now you can figure out when somebody's actually drinking, without even having to have a camera or anything. So a lot of potential here for utility in spaces that we hadn't thought about previously. Does that mean that we need to think about the ramifications of that and the impact of that? Yes. And this is always the interesting side effect. Being able to extract information out of something where people had not necessarily assumed that was possible really opens up a lot of concerns about privacy and leakage of information. It's actually even really interesting: recently I've been thinking about something like security issues with computers. The fact that the temperature or the energy profile or whatever says something about what's happening inside; that is a signal that one can use to try to gain more understanding of what a system is being used for. So in some sense, we constantly leak information in the way that we interact with this world.
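The Wi-Fi sensing idea above, detecting people from the extra fluctuation their movement adds to received signal strength rather than treating it as noise, can be illustrated with a toy sketch. The numbers and function names here are invented; a real system would use far richer channel measurements than a simple signal-strength window.

```python
# Toy sketch, not a real Wi-Fi sensing system: values are invented.
# The idea is to *amplify* the fluctuation that people add to a radio
# channel instead of suppressing it as noise.
def motion_detected(rssi_window, quiet_std=0.5, factor=3.0):
    """Return True if the spread of recent received-signal-strength
    (RSSI, in dBm) readings is well above what an empty room shows.

    quiet_std: RSSI standard deviation calibrated in the empty room.
    factor:    how many times the quiet spread counts as motion.
    """
    n = len(rssi_window)
    mean = sum(rssi_window) / n
    variance = sum((r - mean) ** 2 for r in rssi_window) / n
    return variance ** 0.5 > factor * quiet_std

# A steady channel stays quiet; a person walking through the room
# perturbs the multipath and widens the RSSI distribution.
steady = [-60 + (0.2 if i % 2 else -0.2) for i in range(20)]
busy = [-60, -55, -65, -55, -65, -55, -64, -56]
```

The same pattern of looking for person-induced variation in a signal, rather than filtering it out, carries over to the RFID example: a touched tag shifts its response in a way a detector can watch for.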
Now, one of the things that I like about having all of these different options, for example being able to detect human motion or falls through wireless signals, is that while this opens up things you can know about a person that you might not have been able to know, it also opens up an interesting space of trading off what type of technologies you use for sensing. One of the things that we work on, for example, is aging in place: trying to enable the elderly to stay in their homes for longer. But there are a lot of concerns; people can have more accidents, they can fall, all of that. Now, a lot of times what people think is, well, let's put a camera in that space. We know from a lot of the research that we've done that people don't want to be monitored with a camera; in many of the cases, they will just turn off the camera. So if you're able, for example, to detect these things through something like a wireless signal, in some sense that's more privacy preserving than if you were trying to do it with a camera. What's really important is disclosure, because people really need to know that a wireless signal can be used for the purpose of detecting falls. But on the other hand, it enables people to select the set of trade-offs that they're more comfortable with. That seems to strike at the heart of another issue, and that's the concept of a cyber-physical system. It's a term which we use to talk about how the digital world is impacting the real world and how they relate to each other. And in that space, we're noticing that when people look at a cyber-physical system, the embodiment of it, they often have an emotional response. Sometimes it's fear, and sometimes it's something else.
But when it's more behind the scenes, when it's something that might have a real impact on their life but it's a series of zeros and ones, digitized, people seem to be more oblivious to it. They don't seem to have any real sense about it, or any sense of fear, particularly when it could do something wrong in that space. I suppose one of the challenges in this discussion is the lack of definitions. I mentioned AI before, and artificial intelligence isn't really defined, not in a common way and not in a concrete way that we can use in that discussion really well. Cyber-physical systems is an area which we've taken at the Institute as a term that we need to really come to grips with, and there's a lot of push around the world. I don't suppose you'd have a stab at trying to give that a definition, would you? Well, I can try to give it a definition. I think, as you mentioned, a big part of it is really being able to connect the physical world to the digital world. I tend to think of a cyber-physical system as one where the physical embodiment of that system is a part of the picture, as opposed to what one might think of as an IoT system or things like that, where people might not even be aware of the fact that it exists. So in some sense, it's a more specific, or more targeted, term than anything in general that connects the physical world to the digital world. That's how I tend to think about it. I've actually seen it used in many ways; this is one of the problems with these terms, that people use them very differently. And as you said, even with AI, you talk to three different people and you're going to get three different definitions. But I've seen it used to simply mean something that connects the physical world to the digital world.
I tend to think of it as having some form of physical embodiment beyond just that. What's really interesting to me is, if you look at people's attitudes, I would say it's true that when you actually have an embodiment, people might have stronger reactions, certain concerns, things like that. But I would argue that the world is actually changing quite a bit, and people are starting to really worry, especially with all the latest stuff that's been happening with Facebook and so on, about these systems that don't have a physical embodiment but are actually impacting every single aspect of our lives today. In the US, that is definitely top of mind for the majority of people today. So what's really interesting to me about these systems that do have embodiment is more about how you actually create that interaction, and what kind of emotional responses it evokes. Because it's kind of interesting: if you think about a robot, an embodiment of some intelligent system in a robot, we know from the research that, for example, children interact with these things very, very differently than if you were delivering the same capability in a tablet. We know that from the research already; it's very, very different. They start to assign qualities to it that are human-like, where you don't necessarily see that type of behavior from, say, a tablet, even if it's doing exactly the same thing for them. That strikes at another thing which is very important to this whole topic, and that's diversity. The way that people interact with technology depends on their cultural background and who they are and all that type of thing.
So I was wondering if you would share your thoughts about, when it comes to designing these systems, what we need to think about in terms of getting those datasets right and doing the appropriate research to actually build tools that will have these good outcomes. Yeah, that's actually a great question, and it touches on so many different points, but let me try to address a couple. One, you used the word diversity. If you think about it, today the fact that many of these systems are about training deep networks to come up with some decision based on data means that the ability of these systems to behave in a certain way is entirely dependent on what type of data you have thrown into them. So as we start to build these systems, we really need to be thinking about: are we actually sampling the right part? Do we have a reasonable representation of what we want that system to work for? That's not necessarily saying that I want any system I develop to work for every single thing, because typically any product that you develop, you will think about developing for a certain use, for a certain application, sometimes in a certain culture, in a certain setting, all of that. And these decisions will impact cost, will impact what it is that you're trying to do. So I don't think the answer is to say, well, everything needs to work everywhere. The point that we really need to be cognizant of is that you don't want to design a system where there is an unintended consequence that you haven't thought through. Say you're trying to design a system that is able to detect that a person is there.
And you haven't really trained it on all sorts of different people, all sorts of different races, all sorts of different ages, and things like that. Then this system isn't going to work when it encounters people that are outside of what you've trained it for. That is an unintended consequence, because you need it to work irrespective of that constraint. Would you agree with the saying that algorithms have a country? So if they're made by a particular people, particularly if they're made by a homogeneous set, a group of like-minded people of a particular subculture together building something, then that thing that they make, in this case an algorithm, has a particular country. It's come from somewhere, so it's reflecting those views. And then, perhaps unconsciously, it's reflecting or enforcing those in another part of the world. That's absolutely true. And you could think of it as a country, you could think of it as a gender, you could think of it in so many different ways. In everything that we design, if we don't bring a diverse set of expertise, race, culture, gender, all of these things, then what you're really doing is introducing blind spots. What I was trying to say earlier is that we have to know what we are missing. And if you don't have the right people to know what you're missing, you won't know what you're missing, so these things will go unnoticed. And that isn't just about AI and systems like that. Think about designing a product, any product, even if it has nothing to do with data.
If you don't bring in enough diversity on the team that's creating that product, and you are trying to appeal to a diverse set of users, and not just the users that look like the people who were designing the product, you're not going to succeed. In the same way, in the case of designing AI systems, that translates to: what data am I sourcing? What am I missing? What data am I not sourcing? So there is a specific instantiation of that that has to do with AI systems. But in general, it's been proven that you can't really get as good an outcome if you don't bring the diversity of your user space into your design space. What are the enablers for that? What helps you get to that point? What helps you get to that point? Yeah, in terms of having diverse teams. Yeah. I think we have to start by solving the education system, to begin with. There are so many different problems that I could get into, but one of the things that, for example, at Intel we've been really quite focused on is trying to improve diversity. In fact, our CEO, about three and a half years ago, pledged that the Intel population would mirror market availability by 2020. At the time, it was such an ambitious goal, because we were not anywhere near there; there was a big gap. What was interesting was that once that became the goal, everybody started to run towards it, and that mobilized people. The nice thing about it is that then you get data to understand where you are, you figure out how far you are from the goal, and then you start to come up with actions to make it happen. What's interesting to me is that from the last numbers I saw, it looks like we're going to make this a year ahead of time. Really? Yeah. So that's awesome.
But if you think about it, that's mirroring market availability. Now look at market availability; take gender, for example. You're talking about somewhere in the range of 22 percent as the market availability coming out of college. So to me, it's awesome that we're committing to that and solving that problem. But the more interesting problem to solve, once you've mirrored market availability, is how do you get market availability from 22 to 50 percent? It should mirror the population. That's where a lot of the efforts around really understanding what's impeding women from getting into these fields, for example, become really important. And I do look at this a lot. One of the things that really bothers me (I was talking to Eleanor about this earlier) is that there is this undertone that says, well, these types of topics, women are not interested in; women are not capable of doing math and science and all of that stuff. Which, in my opinion, is... I'm not sure I want to use that word. Let's say it's ridiculous; let me use the polite word. I think it's absolutely ridiculous to say that. In all of my interactions and discussions with girls in schools, with women in colleges, and things like that, a lot of this is about what they envision themselves to be like and to do. So I think it falls on all of us to bring a very different perspective of what a computer scientist, an engineer, anyone in this field, looks like, and what type of jobs they do. What's also really interesting to me is that what we're realizing is that what's really important is that transdisciplinary approach.
So if you take that, we know that in many respects women actually do tend to do quite well working across disciplines, in bringing people together from different perspectives. So that becomes a really key thing, right? As we move towards a world where we're trying to solve ethics in AI and all of these different problems, how do you do that if you don't bring that diversity of disciplines, expertise, and skill sets to the picture? It's also more than a pipeline issue, though, isn't it? There are other issues that need to be addressed in order to foster that diversity. Could you speak about that? Say more. So, with respect to the pipeline that you were talking about: helping students understand from an early age that this is a career option for them. My understanding is that while that is part of the issue, and is an area often focused on in policy and other spaces, once we start to try to get diversity into the workplace, there are certain factors in the landscape, as we know it today, that are adverse to that. Oh, absolutely. I mean, there's no question to me that there's tons of bias; I could go on and on forever. But I think there is this assumption about what is acceptable behavior, right? So I think it's really kind of interesting. If you think about how men tend to think about the role a woman should play, there is somewhat of a disconnect between how they see women in the home and how they see women at work. So as a result, you tend to always hear one of two things. Either women are not aggressive enough, they're too timid, and so on. Or they're too aggressive. And it's really this bizarre thing: it seems like the appropriate behavior they expect women to have is a very, very thin slice, right?
And in fact, you see a lot of data showing that complaints about women in performance reviews and things like that tend to surface around behavioral issues, right? And for men, they tend to be around skill issues and technical issues and all of that. So you have to ask yourself the question, and most of these reviews are being done by men, not by women, right? So if you think about that, there are definitely a lot of these biases in the way that people think that are very unconscious in nature. I mean, a lot of people have unconscious bias; there is no question about that. But to me, this is why (I was having this conversation with Genevieve about this) while we talk about algorithmic bias all the time, and it's important to actually train systems on data that is representative of the world, in some sense there is a nice thing about being able to reprogram something to not be biased. And I would argue it's much harder to actually reprogram humans, even when they know that there is a bias, because by definition, if it's an unconscious bias, it's very hard for them to see it in the moment. So that's a great point. The point I just want to underscore is around data sets and the training of algorithms and AI. We're already seeing, for job candidates, that using historical data sets is not always advisable, because you tend to get more of the same, and that goes to bad places. And so if we start to see more of these decision-making tools brought into circumstances like the ones you were talking about, and we're not wary about what we're doing, not mindful about how we're applying them, we can amplify more of the same, endlessly. Absolutely.
I mean, essentially the bias that exists in the world will get reflected in your data set, and you'll amplify that bias over and over again. The nice thing about having these systems is that you can actually do all the analysis you want to establish that there is a bias, right? Whereas if it's just based on human judgment, it's very hard to even quantify that bias, right? Because you could say, well, why is it that we get that percentage of women into these jobs versus men? Well, one could argue, oh, maybe the women we got didn't have the right skill set, or whatever. If you actually have all of this being done in, you know, some form of a system, you can actually query that system to try to test some of these assumptions, right? Which is very hard to do in people's brains, but much easier to do in a system. So in some sense, you want to use it diagnostically, right? You want to use these systems to understand where there is bias, and then take that into account when you actually design these decision systems, because you want to compensate for these biases. Yes, I'm glad that we've found a sense of hope in that part of the conversation. I am an optimist. So that's good. I think where I'd like to land is to really draw out some more hope and talk about what excites you about the future with respect to technology. So where do you have hope? Where do you think things are going to go that will be really good? You know, I have a lot of hope for technology in the future. I think a big part of my hope for the future is that technology can help with equity. I see technology as something that can be used to level the playing field, whether it's about economics, physical ability, or cognitive ability, right?
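The diagnostic use of decision systems described here, querying recorded decisions to quantify a bias that would be hard to pin down in individual human judgments, can be sketched in a few lines. This is a hypothetical illustration, not code or data from the lab; the group names and numbers are invented, and the 0.8 cutoff is the common "four-fifths" rule of thumb for flagging disparate impact.

```python
# Hypothetical bias audit over a log of (group, hired) decisions.
# All data below is illustrative, not from Intel or any real system.
from collections import defaultdict

def selection_rates(records):
    """Return the hire rate per group from (group, hired) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def disparate_impact(rates, protected, reference):
    """Ratio of selection rates; values below 0.8 are commonly
    treated as a flag for possible bias (the four-fifths rule)."""
    return rates[protected] / rates[reference]

# Invented decision log: 50 applicants per group.
records = ([("women", True)] * 10 + [("women", False)] * 40
           + [("men", True)] * 25 + [("men", False)] * 25)

rates = selection_rates(records)
print(rates)                                    # {'women': 0.2, 'men': 0.5}
print(disparate_impact(rates, "women", "men"))  # 0.4, well below 0.8
```

Once a disparity like this is quantified, the same per-group rates can be fed back into the design of the decision system to compensate for it, which is the diagnostic-then-corrective loop described above.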
I mean, if you think about it, there is all this potential for these types of technologies to come in and help bridge whatever gap there is in human capability. And that, to me, is really the biggest hope we have for these technologies. And again, I am an optimist, which is not to say that we shouldn't be wary of technology, because it could always be used to do the opposite, which is to amplify inequity, right? But it's really up to us to make it turn out the way we want it to turn out, right? And it is a tool that can really help bridge a lot of these gaps, because it changes everything. It's so disruptive in nature that you can actually start to think about how to use it to improve equity in your society, right? So that is what I'm really most excited about. That's excellent. I think that's a great place to leave it. Lama Nachman, thank you very much for your time. Thank you.