I'm Eleanor Huntington, Dean of the College of Engineering and Computer Science at the Australian National University. It's with great pleasure that I'm joined today by Lama Nachman, Intel Fellow and Director of the Anticipatory Computing Lab at Intel Labs in Silicon Valley. Lama's work centres around creating technologies that gain a deep understanding of people through sensing and sense-making, and then act on that context to help them across many aspects of their lives. Lama has spent the last decade at Intel shaping the future of predictive computing, where devices powered by artificial intelligence can learn and act on their own to assist people, industry and societies. In 2012, Lama led a team of researchers who developed an innovative software platform and sensing system to help the late Professor Stephen Hawking communicate. Lama's work pushes the boundaries of what computing does to enhance human experiences, which is exactly what we're doing at the ANU through the reimagination of engineering and computing. The project is ambitious in that it will bring together people, technological systems and science to resolve highly complex global societal challenges. So Lama, what applications of context-aware design in computing do you see impacting society?

So, you know, frankly there are a lot of applications, but maybe we can dig into a couple that I think are really key, especially in the near future. One is around the area of more personalised education. One of the problems that we see in the US, and I'm guessing it's probably similar here, is that especially for K through 12, the difference that you see between public schools and independent schools is huge in terms of outcomes. And a lot of the research in that space is showing that having more personalisation and smaller classes ends up making that big a difference.
So one of the things that we've been trying to look at for public schools is: can we actually bring more technology into the classroom so that we can personalise that education? And can we bridge that gap without necessarily having to incur all the cost of making those classrooms smaller?

That's really interesting. Actually, a report came out in Australia just last week that talked about exactly this idea: that we want to move away from large-classroom standardised teaching in schools and towards more personalised teaching.

Exactly. So if you think about it, a technology that's around you, that's ambient in the environment, can actually understand what you're struggling with, where you're engaged, where you're disconnected. And then it can tailor that, both to bring feedback to the teacher and to change the content on the fly so that it can keep you engaged.

Okay, so how much of that is about understanding levels of attention in the student, and how much of that is relying on historical data about the way previous students have gone through this process?

So it's actually both. The first part is really trying to understand engagement level, because engagement tends to be a really good signal for how you tailor the content and understand where they're struggling.

That's really cool.

And you can do that from facial expressions and some physiological data, things like that; you can actually show a lot of correlation between those signals and engagement level.

Okay, so less about adapting what's in the content, and more about raising a flag and saying, okay, we're drifting away now.

Yeah, and you could do both, right?
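The kind of multimodal engagement estimate described here, combining facial-expression and physiological signals into a score that can raise a flag for the teacher, could be sketched very roughly as follows. This is a minimal illustration of the idea, not Intel's actual system; the feature names, weights and threshold are all hypothetical, and a real model would be learned from labelled classroom data.

```python
import numpy as np

# Hypothetical per-window features, each normalised to [0, 1]:
# fraction of time gaze is on the task, a facial-expression valence
# score, and a normalised physiological signal.
WEIGHTS = np.array([0.6, 0.25, 0.15])  # illustrative, hand-set weights

def engagement_score(window: np.ndarray) -> float:
    """Combine the multimodal features of one time window into a
    single engagement estimate in [0, 1]."""
    return float(np.clip(window @ WEIGHTS, 0.0, 1.0))

def flag_disengaged(scores, threshold=0.4):
    """Return the indices of windows whose engagement estimate drops
    below the threshold, i.e. where a flag would be raised."""
    return [i for i, s in enumerate(scores) if s < threshold]

# Example: three consecutive time windows for one student.
windows = np.array([
    [0.9, 0.8, 0.7],   # attentive
    [0.8, 0.6, 0.6],   # still engaged
    [0.2, 0.3, 0.4],   # drifting away
])
scores = [engagement_score(w) for w in windows]
print(flag_disengaged(scores))  # flags the third window (index 2)
```

The same score could drive either use mentioned in the conversation: surfacing a flag to the teacher, or switching between content streams automatically.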
I mean, if you have different streams of content, you can actually automate that. But it's also really important, and we think about it a lot, in terms of how you empower the teacher to be a better teacher. It's not just about taking the teacher out of the loop, but really keeping the teacher in the loop.

That's really cool. That's really exciting. And in terms of that, does that mean that at some point it will be possible to deliver content to students in a different order, in a different sequence?

Absolutely. And we've actually been doing a lot of studies in classrooms over the last five years, initially to try to understand where technology can help. Because one of the things that we always have to worry about when bringing solutions to a specific problem is that we really need to understand where we help the humans, versus the tendency to just say, oh, we'll just replace the human. So we spent more than five years doing a lot of research in classrooms, in public schools (Turkey, actually, is one of the places that we've looked at) to really try to understand where the gap is and how we can augment that capability.

Okay. So how do you actually do that study? How do you find out what you need to know in order to build that technology?

So we have a very multidisciplinary team. We bring in user research and design research. We do a lot of early ideation and Wizard of Oz type things. Then we go and collect a lot of data more ambiently. Then we try to build a lot of these models behind the scenes, and then take them into multiple deployments to understand how accurate they are and, ultimately, whether they're actually translating into the right outcomes.
One of the problems of doing this type of work is that it requires you to be in the space where it's needed, and it's not really easy to get deployments into actual classrooms. But the initial feedback that we've been getting from teachers is actually really great.

That's awesome. So you send a team of people out to sit in a classroom in Turkey, and watch how the students are engaging with the material, how the students are engaging with the teacher, how the teachers are engaging with the students.

And it's important to talk to all the stakeholders, right? Because the stakeholders are not just the students and the teachers, it's also the parents.

And so how much do you have to think about environmental factors? For example, there's quite a lot of research showing that if a young kid doesn't have breakfast, they struggle to concentrate. Do you have to think about that?

That's actually a great question. And this is why, if you think about context-aware computing, it's really important to think about a classroom experience that extends way beyond the classroom. The whole day, right? How much sleep did they get? So we're actually doing tons of work on trying to understand sleep, understand the activities.

So the full context?

Exactly. Because I think that's really the only way you can start to decipher it, if you're looking for causation. It's really the only way that you can start to decipher all of these different elements. And when you go to the real world, there's so much mess in the real world. So you need even more context to understand that mess.

So when you go out and do fieldwork on a particular pilot site, that's going to be very specifically grounded in a cultural location, a socioeconomic demographic, all of that sort of stuff.
How do you then work out whether or not you can generalise what you've learnt to different cultures, different locations?

Yeah, that's a great question. A lot of what we end up doing in the ethnographic research is to go to different places across the world. We try to understand what those differences are and where they would imply differences for the technology. Because sometimes even the sensing needs to be different. Sometimes what you do about it would be different, and where you would actually take the action might be different. But yeah, we tend to go all over the world to do our research.

And so when you develop a product, does that mean in the end we've got this dystopian Big Brother thing, where the product is watching how much you've had to eat for breakfast in the morning before you come to school?

So I think that's really one of the key questions. If you look at these types of technologies, they have a huge potential to really make people's lives better. But if we want to think about how we do the right thing from an ethical standpoint, one of the things that's extremely important is to understand what the privacy concerns are, and how these systems are really helping you without being exploited. So for example, we look a lot at how we can extract the information that we need, but not more than we need. How do you push the computation to happen as close to the sensing as possible, so you're not opening a channel of communication that might leak other information?

That's cool.

Another big, important part, I think, as we think of AI in general, is how you actually bring more transparency into the system. How do you explain these actions?
I just mentioned students as an example, but the elderly, and ageing in place, is another area where we think context and sensing can bring huge value. One of the things that we know when we start to deploy these things in people's homes is that the first thing people do is turn off the cameras. So how do you bring in the context that you need without necessarily even having a camera? And if you do have a camera, and what it's actually figuring out is only certain actions rather than the actual RGB picture of somebody, how do you give the person the ability to know exactly what is being captured? Because people have a lot of concerns about their privacy. So that's an area where we spend a lot of effort.

That's really interesting. So computing at the edge is about more than just speeding things up and reducing load on the network; it's also a strategy for privacy. That's really cool. It's interesting that people are so much more sensitive about cameras, but about microphones they're a bit more relaxed.

It's actually really interesting. One of the studies that we're doing now, across different countries, is trying to understand where that sensitivity is. Because what we're finding is that the sensitivity to certain modalities very much depends on where you put them.

And I've always wondered about this. Because with a camera, in some sense, you can understand the risk as long as it's not in certain places. For example, it's not in the bathroom. But the problem with audio is that it's revealing a lot more.

Exactly. So that's what we've been trying to find out: where those sensitivities are. Some obvious things are coming out, but there are actually even some more subtle things that are very much cultural in nature as well.
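The edge-privacy pattern described here (run the inference next to the sensor, and let only a coarse derived label, never the raw frame, leave the device) can be sketched like this. The class and function names are hypothetical, and the brightness rule stands in for whatever compact on-device model a real system would run; the point is purely the data flow.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActivityEvent:
    """The only thing that ever leaves the device: a coarse label
    and a timestamp, never the raw RGB frame."""
    label: str
    timestamp: float

def classify_activity(frame) -> str:
    """Stand-in for an on-device model mapping a frame to a coarse
    activity label. This toy rule treats a mostly-dark frame as an
    empty room."""
    brightness = sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))
    return "room_empty" if brightness < 10 else "person_active"

def process_on_edge(frame, timestamp: float) -> ActivityEvent:
    """Run inference next to the sensor and discard the raw pixels;
    only the derived event is returned for transmission."""
    label = classify_activity(frame)
    del frame  # the raw image never leaves this function
    return ActivityEvent(label=label, timestamp=timestamp)

# Example: a tiny 2x3 grayscale "frame" of near-black pixels.
event = process_on_edge([[0, 1, 2], [3, 4, 5]], timestamp=1700000000.0)
print(event.label)  # "room_empty"
```

Because only `ActivityEvent` is ever serialised, there is no channel over which the RGB picture could leak, which is the privacy argument for pushing computation to the edge.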
For example, some of the work that we've done in Germany indicates that people are actually more concerned about other people who come to their homes being watched. Because you made the decision to put this thing in, but what about people who just drop in? You're not getting their consent. So all of these different things are showing up out of that research.

That's really interesting. So at the moment there are the beginnings of a breakdown in trust around software in general, because, like all really useful pieces of technology through history, it's kind of just escaped into the wild. People have accepted it into their lives, and only now are we really starting to understand how much it's penetrated our lives.

Exactly.

Where do you think that trust issue is going to go?

So I think trust really has multiple facets to it. Part of it is what I was talking about in terms of explainability and transparency. But part of it, frankly, is also that when you start to give up control, can you trust that the thing is actually going to do what you want it to do? And in fact, we see this in everyday life with people, not just machines. When you have a new assistant, for example, the more trust you gain over time, the more you're willing to delegate. I see the same thing in terms of that relationship with technology. How does it explain itself? How can you start to build up that trust with smaller things, so that over time it knows more about you, which enables it to be more tailored and personalised, and you come to believe more in its capability? So you start to be more comfortable delegating, and you build more and more of that trust.
Okay, so that leads into a lot of this stuff around the right to explanation, and explanation in AI in general. Because we all know that one of the major issues with AI these days is that it kind of gets it right, but nobody really knows exactly how. So that's going to lead us down a very interesting path, because, as you say, we're going to build up trust over time because it's going to start to do things that we believe are repeatable, and all the rest of it. And then every now and then, something a little bit unusual is going to happen, and we're not going to know why.

Exactly, and that's really why it's so important in the design of these algorithms to be able to understand why a certain action is taken. And we're seeing a lot of trends there. For example, let me talk about autonomous driving. It's technically possible to train a system that really just mimics: it takes the sensor input and controls the car. But if you think about how these systems are being built today (and actually, Amnon Shashua had a really good presentation on that specific topic), that's not what they do. What they do is build a system of many, many different components, and they try to isolate these things. And the reason is that you want to be able to understand each part of that system. What is the perception system telling you? What is the control system telling you?

Because that enables some sort of explanation.

Exactly, exactly. And we're also seeing some new trends that I'm really quite interested in, for example probabilistic computing: going beyond the unexplainability of deep learning to try to bring more explanation in, for example through the marriage of deep learning and probabilistic computing.

Yeah, and then ideally that gets us back to some degree of explanation and trust.

Exactly, exactly.
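The modular decomposition described here, separate perception, planning and control stages whose intermediate outputs can each be inspected, can be sketched as follows. This is a toy illustration of the architectural idea, not any real driving stack; all stage logic and thresholds are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Trace:
    """Each stage records its intermediate output, so a final action
    can be traced back and explained after the fact."""
    entries: list = field(default_factory=list)

    def log(self, stage: str, output):
        self.entries.append((stage, output))

def perceive(sensor_input: dict, trace: Trace) -> dict:
    # Hypothetical perception stage: derive a world state from raw input.
    world = {"obstacle_ahead": sensor_input["lidar_min_distance_m"] < 10.0}
    trace.log("perception", world)
    return world

def plan(world: dict, trace: Trace) -> str:
    # Planning stage: choose a manoeuvre from the perceived state.
    decision = "brake" if world["obstacle_ahead"] else "cruise"
    trace.log("planning", decision)
    return decision

def control(decision: str, trace: Trace) -> dict:
    # Control stage: turn the decision into actuator commands.
    if decision == "brake":
        command = {"throttle": 0.0, "brake": 1.0}
    else:
        command = {"throttle": 0.3, "brake": 0.0}
    trace.log("control", command)
    return command

trace = Trace()
command = control(plan(perceive({"lidar_min_distance_m": 4.2}, trace), trace), trace)
print([stage for stage, _ in trace.entries])  # each stage is inspectable
```

Contrast this with a single end-to-end network from sensors to actuators: there, the trace would contain only one opaque step, which is exactly the unexplainability being discussed.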
Because I'm sure I saw some research recently that said that even though the probability of an accident in an autonomous vehicle is quite soon going to be less than the probability of an accident caused by a human driver, we as people are going to be much more accepting of accidents caused by people than of accidents caused by software.

Exactly. And there are all the systems that we've built up around insurance and liability; it's about much more than just the science. There are all of these things where you start to need to rethink what it means when it's actually the car, and who is responsible.

Exactly, yeah, and it's already embedded in our legislation as well. So it's an interesting time; we're living in a living lab right at the moment.

Exactly, yeah, exactly.

So tell me what you think the major trends in engineering and computing are going to be over the next little while.

So I think, given that autonomy and assistance are really the areas where we're seeing the most change, if you think about the types of disciplines that need to be brought together to make that happen, you're looking at anthropology, ethnography, design, computer science, policy, the humanities. To me, it's really impossible to think about these problems if we keep these things totally separated. And in fact, one of the things that I've done within my lab is to build a team of 55 people that's totally multidisciplinary, and it really takes bringing all of these people together to solve these problems. But every time you do that, when it doesn't start from the beginning in the education system, it's really hard. People don't even speak the same language, so spending all that time just to translate and get people comfortable with each other is a very inefficient way of getting there.
So contrast that with: why don't we have a discipline that brings all of these different disciplines together?

Yeah, so how much effort do you have to go to at the moment to get all of that communication actually going?

In my team specifically, when I built it, it took me more than a year to make sure that the right people were embedded in all the right conversations. And the beauty of it is that once you do it, people see the value. But it takes time, because they literally speak different languages. They speak past each other, and then they miss things, where at the end of the day somebody makes a decision, you look at it, and it's like, why? That doesn't make any sense. I contrast that with, for example, my son, who goes to an Italian immersion school in San Francisco. It's a transdisciplinary Reggio and IB combination, and it fascinates me how everything there is about being transdisciplinary. It's almost like second nature to him, and he's 11. That's how we should actually build these disciplines.

So when you talk about transdisciplinary, does that mean you think we're going to end up in a place where everybody has the same skill set? Or do you mean that there are going to be people who are experts in particular areas but who can communicate in a transdisciplinary way?

I mean, you will always have different expertise in different areas, but I think there is a level of knowledge that is needed across disciplines, and there needs to be a practice of working across disciplines. And both of these things are needed, right?

Yeah.
And so one of the things we're talking about here is how we make T-shaped qualifications and T-shaped people, so that this does exactly that.

Exactly, yeah, absolutely.

And one of the things that we're discovering is that that's partly an outlook thing and partly an expertise thing.

Yeah, exactly. Absolutely, spot on.

So when you were building your team, were you looking for outlook and expertise, or just expertise?

I was looking for both, for sure. And I would say I was more successful in some areas than in others, but over time it almost becomes a way of life.

Yeah, okay, well, that's really cool. So in that sense, it sounds like what we're all going to have to do is work out how to create the next generation of folks who still retain expertise in a discipline and who, on top of that, are willing and able to collaborate respectfully across disciplines.

Absolutely.

Wow, okay, that's really interesting. So how many years do we have to get this right?

Yeah, I thought you were going to ask that. I mean, seriously, if you think about all of these different spaces... it's funny, because we seem to be talking a lot about autonomous driving, since that's kind of the most obvious example. But frankly, there aren't a lot of things that you can do today where you don't touch some sort of autonomous system, even if it's not physically autonomous. It's already being used everywhere: in sentencing, in financial transactions. There are all of these decisions being made, and you can't imagine building these systems without having all of this different expertise come together. So I think it's timely, and in some sense it's almost late.

Yeah, okay. Right, so we'd better get moving quickly.

Exactly. So we're on the right track.
And one of the things I often say is that engineering is actually mostly about technological trust at scale. And that's one part of the conversation we've been having for a while.

Exactly, totally agree.

So we can't end this conversation without a discussion about women in STEM. How important do you think it is to be a role model for young women in science, technology, engineering and maths?

I think it's absolutely important. And I spend a lot of time talking to girls, in schools and in college, and even just coaching within Intel. One of the things that bothers me at a deep level is that a lot of the time, people want to explain the fact that we don't have a lot of women in STEM by saying that women are not interested in technical areas. And I can't tell you how wrong that is. Because if you take medicine as an example, there we don't have that problem, and it's not any less technical a field. I believe the problem is that engineering is being positioned in a way where women don't see themselves in these careers, rather than that they're incapable of doing the work. And the more I go out and talk to people about what it is that I do, trying to get people excited about what really matters to them (maybe it's not the video game, but actually changing society with engineering), the more you see a totally different interest. So it is really important for the word to get out there, to start to imagine engineers who look very, very different. And in fact, when I think about diversity in general, to me diversity is about so many different aspects of life; it's not just necessarily about gender. But it's something where I think we cannot be successful as a society if we don't bring in all of these different, diverse perspectives. Our life experiences bring a lot to the design of systems, to our work environment, to all of these things.

Oh, indeed.
I mean, we are building the world around us. And if the folks who are building that world are not representative of the world that we're trying to create, then we're just not going to get it right. So, picking up on that motivation issue, one of the things that I've been really pleased to see is an increase in the level of discourse around issues to do with STEM. One of the challenges I see coming out of that is that folks are now using STEM, an acronym, as if it were a single word, and it loses some of the differentiation. Science and maths are skills to be mastered, and they're about the joy of discovery. Technology is about things that we make. And engineering is about solving human problems, using your mastery of science and maths to build technology. And that means that each of those letters comes with a different set of motivations. So is that your experience when you go out and talk to people?

Yeah. This is why I was saying that if you think about it from an expertise perspective, I don't think that's where the problem is. It's really: what are you applying it for? And I see a lot of people being motivated by very different things, depending on where they come from. So it's really a question of how you tap into that interest and motivation, and make sure that you tell that broader story, rather than the picture that's going to come up when you type 'engineer' into Google, for example.

It's usually a yellow hard hat. If you look up the images, it's almost always a young man wearing a yellow hard hat. I don't understand the yellow, but anyway.

Exactly, yeah.

So what's next for you?

What is next for me? It's actually really interesting. At the beginning, in the introduction, you mentioned the work that I was doing with Professor Stephen Hawking.
One of the areas that I'm really, really passionate about is how we bring these technologies to assist people who really need independence but are unable to have it. One of the things that I'm really excited about: if we think about a lot of the automation that happens today, and the ability to control the physical world just by having access to a computer, we look at it as a convenience thing. I can make all of these things happen when I'm outside of the house, and so on. But if you now think about people who have disabilities, who cannot physically operate these controls, the fact that all of this technology came in to enable that for our convenience means it can now be tailored towards people with disabilities, enabling them to be much more independent. So we're actually taking the work that we have done with Professor Stephen Hawking (we've already taken it to open source) and building more and more on that technology to enable people to really just connect to the physical world.

So through the democratisation of technology, and the open-sourcing of a lot of the hard work that you've done, you're hoping to enable a whole bunch of other people to do that.

And it's really important, because the problem with assistive technologies in general, for people who have disabilities, is that they need to be tailored very, very specifically. So they don't really benefit from economies of scale if you think about vertical solutions. However, a lot of the components can be reused. So one of the things that we wanted to do with this whole approach is that if we put it into open source, then the amount of effort needed to tailor it becomes much, much lower. And now you've enabled a whole set of people to actually have access.
And so it's about integration and systems engineering, based on your platform plus commercially available, democratised technology.

Exactly.

That's really cool.

So one thing that we're actually adding right now is a brain-computer interface. In Stephen's case, at least, he was able to move his face. Some people are totally locked in. So we're actually trying to capture brain waves, EEG signals, through a brain-computer interface, and use them to control that exact same system.

Wow, that's tremendous. And so, on that note, thank you very much. We really appreciate you visiting the ANU. Enjoy the rest of your visit.

Thank you, thank you.