My name is Bert Shi. I'm a professor of electronic and computer engineering at the Hong Kong University of Science and Technology. I'm also the director of the Center for Aging Science there, where we bring together faculty from across the university to address issues in aging from social, economic, scientific, medical, and technical perspectives. My own research is in video-based measurement of human behavior, such as facial expressions and gestures, and in how we can use these to build better human-machine and human-robot interactions by better understanding the intent and underlying state of the humans the robots are interacting with.

My name is Pascale Fung. I'm a chair professor of electronic and computer engineering, and I'm also the director of the Center for AI Research at our university. This is a center that promotes multidisciplinary research, in particular what we're talking about today: AI and robotics for health and smart aging. So we collaborate. My own research area for the past 30 years has been conversational AI and speech and language processing, which also includes using speech and textual signals to analyze and detect physiological states in people. Bert and I have been collaborating since 1997, and we've worked on many different kinds of projects in the area of human-computer and human-robot interaction. Today, we're going to introduce three projects that we're currently working on for smart aging.

Yes, I remember very well that first paper we worked on together on speech recognition back in the late 1990s. That was really the spark of a great collaboration between the two of us. What we'd like to talk about today is an area of intersection between our two centers, one on AI and one on aging science. The problem we're addressing is that we have an aging society. This has been well publicized, and of course, on balance, this is a great thing.
It reflects the fact that we're living longer, that we have longer lifespans, and it's also increasing the pool of knowledge, expertise, and wisdom in our society. But we all recognize that challenges come along with this, and those challenges are driving us toward a vision of more integrated, more seamless, and higher-quality health care for the elderly. This will have a number of benefits. First, better monitoring and support for the elderly in the community. Second, more information that we can use to generate insights into the elderly, enabling better and faster health care. Third, we have to think about systemic effects: how can we alleviate the workload demands on our health care workers? And fourth, how can we integrate all of these disparate systems to provide a seamless continuum of care across the patient's health span trajectory, passing information successfully from one system to another? So what kinds of research problems are we looking at, and what technologies might be involved?

Yes, indeed. At the Center for AI Research, we work across departments and schools with our roboticists, computer scientists, life scientists, and even our social scientists to look for solutions for the aging society. We cover application areas from home care and community care to hospital care, and we use AI for things such as precision medicine, smart diet, conversational AI for virtual companionship, and even speech, gaze, and computer vision to detect negative emotion in the elderly or signs of dementia. We use robotics for navigation and mobility assistance, and we also hope to use AI in consultation and diagnosis scenarios. So there are a lot of areas we're covering.
For me, this is actually personal, because six months ago my mother, who is in her 80s, had a stroke. By some fortunate coincidence, I arrived at her place to have dinner with her within half an hour of the stroke, and we called the emergency services, who rushed her to the emergency ward. My mother lives alone because she's very independent; she insists that she doesn't need to be taken care of all the time and doesn't want another human being in her house around the clock. One month prior to this incident, we had actually talked about this, because I wanted to hire a full-time helper for her, and she insisted that she didn't need one. However, she said she wouldn't mind having a robot at home 24/7. I was surprised: why, Mom? And she said, because a robot is like a microwave, a rice cooker, a refrigerator. She doesn't need to take care of the robot, and she doesn't feel that her privacy would be violated.

So when she had that stroke, I was thinking: what if nobody had been there within the three-hour golden window to save a stroke or heart attack patient? Wouldn't it be nice if we had a robot that could monitor elderly people living alone? Maybe someday, in 20 years, it will be me. In any case, we rushed her to the hospital, and this led to our idea of building a home robot for the elderly like my mother.

Yes, I remember that experience, and I can't believe how incredibly lucky you both were with that timing. But what if we could remove that serendipity? That would not only provide better care for your mother but also relieve a lot of your concerns.

Exactly. So this is my mother; she recovered from her stroke in the ward. We then received a very generous donation, and we started this project called Care E, a home care robot.
I'm sure you have heard about home care robots for many years now, and none of them has been really successful. The reason is that we don't actually need all kinds of fancy robotic functions. What we need is a robot that can administer medicine daily and bring water to my mother, and a robot that can monitor 24/7 whether there has been a fall or any sign of distress, using computer vision and gaze detection, or even by initiating a conversation with the elderly person so that the robot can detect from the speech whether something is wrong. A stroke patient has slurred speech; that was the first sign I detected when I called my mother. Such a robot can also sometimes assist my mother in getting from the bedroom to the bathroom, using autonomous driving techniques within the home environment. This robot would also have infrared detection and motion detection. And last but not least, my own area: it will have a conversational AI system that can initiate conversations with my mother, both to keep her entertained and, at the same time, to detect whether there is any sign of dementia and so on. The system can also establish a communication channel between my mother and me if necessary, or between my mother and her caregivers remotely, and that would indeed relieve a lot of my stress. Many aspects of this are enabled by AI and robotics, and we are working actively on it.

But I also noticed that when my mother was rushed to the emergency ward, the nurses came over and asked her many questions to check for signs of stroke, dementia, and so on. I saw a nurse going around asking all the patients the same questions; some of them were conscious and some were not. And the neurologist we were seeing was calling in remotely.
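As an aside for the developers in the audience, the monitoring-and-escalation behavior described above can be sketched as a simple sensor-fusion loop. This is only an illustrative sketch, not Care E's actual software: the class and function names, thresholds, and escalation levels below are all assumptions made up for the example.

```python
# Hypothetical sketch of a home-care monitoring loop: fuse simple detector
# outputs (fall, slurred speech, prolonged inactivity) into an escalation
# decision. All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Observation:
    fall_detected: bool       # e.g. from a pose-estimation model
    speech_slurred: bool      # e.g. from a speech-analysis model
    no_motion_minutes: float  # e.g. from infrared/motion sensors


def assess(obs: Observation) -> str:
    """Map fused sensor readings to an escalation level."""
    if obs.fall_detected or obs.speech_slurred:
        # Strong distress signals escalate immediately to family/emergency.
        return "alert_emergency"
    if obs.no_motion_minutes > 120:
        # Prolonged inactivity: the robot initiates a verbal check-in first.
        return "initiate_conversation"
    return "ok"


print(assess(Observation(True, False, 0.0)))    # alert_emergency
print(assess(Observation(False, False, 180.0)))  # initiate_conversation
```

The design point, consistent with the talk, is that the conversational check-in sits between "all clear" and "call for help": the robot tries talking to the person before escalating.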
So there was a clear need for more assistance for patients like my mother, even in the hospital. Bert, do you think we can do something to help the nurses and the doctors?

I think you've identified an important problem. We have to recognize that the demands on healthcare workers are expanding. A recent survey found that doctors and nurses felt they were spending too much time on administrative tasks. A follow-up survey actually measured this and found that only around 40 to 44 percent of healthcare workers' time was spent working directly with patients. These administrative tasks are very important for ensuring the quality of care, but they probably weren't the original motivation for people to enter the field. In fact, the World Health Organization has identified motivation as one of the key levers for bolstering the quality of our healthcare system. So the question we asked ourselves was: could AI and robotics help manage this workload for healthcare workers?

To address this problem, we assembled a tripartite team: researchers within HKUST, not only from engineering but also from social science; a private company in Hong Kong developing humanoid robots, like the one you see here; and, of course, the context providers, the hospitals. We arranged discussions to build a common understanding of the needs. What the hospitals can provide is a picture of the actual needs; what the company can provide is a picture of the capabilities actually available in the market; and what the discussions can do is identify the gap between the two. Then we at the university, working on the AI technologies, can try to close that gap, and that is really what we've done through these discussions.
Through this process we've identified two use-case scenarios for the robot. The first is the one you see here, with a nurse playing the part of the patient: an interview called the Abbreviated Mental Test, designed to explicitly assess for things like the confusion that Pascale talked about before. This test has to be delivered to every patient, and you can imagine the robot we've developed delivering it, identifying those patients who are most in need or most at risk of confusion, and reporting them, privately of course, to the nurses. The nurses can then pay real attention to those patients instead of going around administering the test over and over.

The second is a patrol task. As you may know, at night the wards are very lightly staffed, but the patients are still there, and we need to know whether anyone is getting out of bed or at risk of a fall. So we're going to have the robot patrol the ward to detect these kinds of dangerous situations. You might ask: isn't it possible to put cameras in the room to address this problem? But the issue is that we need to prevent things from happening; detection alone isn't enough. We need to actually intervene. So you can imagine the robot coming over to the patient and using all the cues available to it, the same human cues that we use, to say: wait, a nurse is coming, please stay here, and someone will come and help you very soon.

But the hospital is just one setting, because healthcare doesn't end at the hospital. Once your mother went home, I know she had a lot of follow-up care. Is there a way AI and robotics can help in this case as well? Yes, indeed.
When my mother was discharged from the stroke ward, she went home, and luckily she recovered quite fast, but she had to be followed up by community workers, speech therapists, and a physiotherapist. Sometimes a community worker will come, but only once every few weeks, because, as they told me, they just don't have enough people to help these patients at home. So we thought of a solution we've been building: a virtual health assistant, which is a conversational AI system. You see her here; her name is Nora. She has a daily conversation with the elderly person, and from the speech, facial expressions, and gaze detection, she detects the mental state of the elderly person and monitors it over days or weeks, depending on what the doctor would like to see. The healthcare worker can set up the system and then collect this data on the next visit.

We also piloted this system during the pandemic at one of the quarantine hotels in Hong Kong, for people who had to go through 14 to 21 days of quarantine. I was one of those people, so I got to talk to her every day to tell her how I was doing. If there were any dangerous distress signals from anybody, the system would alert the front desk or the emergency services. The system also leads the person through daily exercises, meditation, yoga, and so on.

This virtual assistant is a dream come true for me, since I've been working on conversational AI since the late 1990s, and this is really the first time I feel our technology can truly help people, in this case, my own mother. Now, you probably know about ChatGPT and other chatbots today. This is really the era of a new generation of conversational AI technology, but there are many challenges ahead of us.
For example, these generative AI systems are very good at conversing in an open-ended manner: you can talk about anything you want, and they will give you a very human-like response. However, today we cannot use them to build systems like this for elderly people or patients, because today's generative AI systems are still not controllable: they might generate hallucinations, meaning responses with no factual grounding, or inappropriate answers. So the research goes on, and the very challenging work for us today is focused on making these systems not just more empathetic but, even more importantly, safer and grounded in factual knowledge. There will be a panel later on where we talk about these challenges.

So there are many open challenges, and having collaborated for all these decades, we are now entering a new era of AI and robotics for smart aging. Bert, there must be other kinds of challenges in smart aging and aging science.

Yes. As I mentioned before, aging is certainly interdisciplinary, or multidisciplinary, but I think what our projects have really demonstrated is that it is also multi-situational. What do I mean by that? It is situated in different contexts: physical context, social context, economic context, and temporal context. By physical context, I mean, of course, the home and the hospital. What we've seen is that we've used different forms of robots: they're not all humanoid robots; sometimes they take different forms. So how do we match the physical form of the robot to the needs and the environment? By temporal context, I mean that we've talked about three different projects addressing three discrete points in the healthcare trajectory.
But as I said, what we really need to do is integrate these systems so that there is systemic knowledge that allows the patient to be handed off from one to the other seamlessly, providing better care. When we talk about social context, we need to remember that it's not just the patient-robot interaction we've talked about here, but how that interaction integrates into a team: a team of doctors, nurses, family members, and friends, all working around the patient. How can we integrate robots into these teams? And the final context is, of course, economic. Right now these robots, especially the humanoid ones we talked about, are quite expensive. How can we lower those costs to bring the benefits to a wider range of people?

So, for developers of AI and robotics technology: we actually need your help. We need multi-stakeholder collaboration. We need feedback from medical and health experts, from people who, like me, have elderly relatives at home, and from social scientists. We need to work together collaboratively to build the next generation of AI and robotics for smart aging and for our future society. Thank you for coming to listen to our vision for smart aging using AI and robotics. Now we can take your questions.

Yes, right here, and then back there.

You spoke about the cost of humanoid robots and getting that down. Can you give a sense of what those numbers look like now, and what you think they will look like in three to five years, when implementation becomes more realistic?

Yes. The ones we're working with now are, I don't want to say too much, but on the order of hundreds of thousands of US dollars.
I think the real issue will be identifying those areas where we have a killer use case that will drive wider adoption. Once that happens, the price will come down.

Yes, and I think we demonstrated multiple paths to robotic systems. We showed humanoid robots, but also very minimalistic, functional robots: robots on wheels that can help my mother at home. They don't need to be humanoid. And we also showcased virtual robots, like a virtual assistant with a computer interface. So I think we have many paths to success, and that is really the future.

Yes, back here. I think right here. Yes.

My question is: what techniques is the university exploring to mitigate these non-factual hallucinations?

Thank you for the question. That is one of my core research areas. I've been speaking about making AI more responsible and safer since I first came here around 2015, when I was challenged by an audience member: the kind of AI you're building, how can it really be beneficial to humans? So what we are doing at the university is a lot of core research on the algorithms, for example, how to mitigate hallucination at the training stage, at the data stage, and at the inference stage, so that when the system responds, it can respond with self-reflection and self-constraint. Today, all these generative AI systems have a kind of safety layer built on top. You've probably heard about OpenAI's reinforcement learning from human feedback, which basically adds humans into the loop to provide a kind of filter. ChatGPT and similar systems will refuse to answer medical questions, for example. But that is a kind of patch, a safeguard, a guardrail, which is necessary but not sufficient.
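To make the "guardrail" idea concrete for the audience: it is a filter layer that sits in front of the generative model and refuses out-of-scope requests before the model is ever called. The toy sketch below uses a keyword check purely for illustration; real deployed guardrails use learned classifiers and RLHF-trained policies, and every name here (`guardrail`, `respond`, the keyword list) is an assumption for this example, not any vendor's API.

```python
# Toy guardrail sketch: refuse medical requests before they reach the model.
# Real systems use learned safety classifiers, not keyword lists.
from typing import Callable, Optional

REFUSAL = "I'm not able to give medical advice; please consult a clinician."

MEDICAL_KEYWORDS = {"diagnose", "dosage", "prescription", "symptoms"}


def guardrail(user_message: str) -> Optional[str]:
    """Return a refusal message if the request is out of scope, else None."""
    words = set(user_message.lower().replace("?", " ").split())
    return REFUSAL if words & MEDICAL_KEYWORDS else None


def respond(user_message: str, model: Callable[[str], str]) -> str:
    # The guardrail runs first, so unsafe requests never reach the model.
    blocked = guardrail(user_message)
    return blocked if blocked is not None else model(user_message)


print(respond("What dosage should I take?", model=lambda m: "..."))
# prints the refusal message
```

This also shows why such patches are "necessary but not sufficient," as said above: the filter only intercepts requests it recognizes, and does nothing about hallucinations in the responses the model does generate, which is why the training-, data-, and inference-stage work matters.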
So, as research institutions, universities are working on making the core algorithms more controllable and more aligned with our safety requirements and human values. Thank you.

My question is: as you develop these robots with all the experts you mentioned, on one level you're developing something that can be used across the globe, but on another level, how do you make sure you're contextualizing it within the cultural ecosystem to make it more attractive to the user? And this is not just about having a sociologist on board. How do you incorporate the local context and local culture?

Yes, that was actually very important, especially for the hospital context we were talking about, because a lot of the prior implementations of the robot we're using were in English. So one of the key things was: can we get Cantonese, the local language in Hong Kong, into the system? That's why we're also working with social scientists who work on language, to make sure we get the cadence right, so that the communication is really set up well. When you talk about local culture, especially in robotics, you're really talking about an interface between the two, and making sure we have that understanding. That's why we need to pull together these multidisciplinary aspects.

Yes, and as I mentioned earlier, my mother said she's okay with a robot at home that looks like a refrigerator, a home-appliance type of robot. So we're not putting a humanoid in her home, because that would spook her, but rather something that looks friendly, more like a home appliance. In Asia, we already have robots delivering food in restaurants; it's a very common sight. So culturally and contextually speaking, it is very feasible to deploy this kind of robot in the Greater China area.
Another reason is that homes in Hong Kong, as well as in Greater China, tend to be apartments: there are no staircases the robot would need to climb. We take all of that into consideration, and we take a minimalist approach to building this kind of home care robot.

You had a question? No? The gentleman in the front. Okay, sure.

Hello, good morning. One of the recent technologies being discussed here is flexible neural electronics, or neural sensors. Have you been able to do any testing with senior citizens wearing neural technology and then connecting the data to your robots, so that they might respond according to the analytics from direct brain data?

Thank you. So far within our projects, we're really looking at more conventional communication bandwidth. We're looking at things like behavior: where are they looking, what is the eye gaze trajectory, verbal behavior, and so on. But you're exactly right. Once you have a robot, you have to ask how you can leverage the strengths of the robot and the strengths of the human. If you only use human-like interaction, the robot will probably always be at a disadvantage, depending on your beliefs about the singularity. But once the robot has access to other modalities, this is one area where we can really expand its capability. So it's a great point. I think you'll see this kind of brain-computer interface more in a doctor's office or a hospital setting than at home, in the future.

Hi, thank you for your incredible presentation.
I'm really interested in how the convergence of AI, the metaverse, and blockchain can help with the mental health crisis we're experiencing globally. I'm wondering whether, with your digital avatar or digital robot, you have considered mental health applications for people, young people especially?

Actually, we were asked by one of the groups at the WHO to test this with young people. It was during the pandemic; that's why we converted the avatar to help people in quarantine. There was a lot of loneliness, with people isolating at home, so they thought this kind of virtual companion would help, not just to monitor people's mental health but also to provide some kind of communication. Luckily, I think the pandemic is over, isn't it? I don't really want to imagine seeing that kind of situation again, but if it does happen, we now know what we need and how we can preemptively prevent a mental health crisis by connecting people.

That said, I don't think robots will replace human companionship. As I said, in our robot project, one critical function is for the robot to initiate a contact or communication channel between my mother and me. My mother has an iPad and an iPhone, but she still doesn't know how to use them to call me; she just memorizes my mobile number and dials it. So the robot has to proactively talk to her, ask whether she wants to talk to her daughter now, and initiate that call. I think this is helpful, but it's not going to replace human communication and human companionship.

One project I want to mention that we didn't cover here is a collaboration between myself, a life scientist, and a Japanese landscape designer, because one thing that has been shown is that connection with the natural world is very important.
So we worked with the life scientist and the landscape designer to build a Japanese garden on our campus, bringing people there and studying their responses through technologies like eye-gaze tracking, and seeing how those eye-gaze responses correlate with relaxation responses. Because even when people are simply sitting in an environment and looking at it, they are not passive; they are actively engaging through where and how they look. We found that this was indeed correlated with the relaxation response, and we're now looking to see whether those findings replicate in VR as well.

Any other questions or comments? Thank you all for your attention. Yes, thank you for coming to this session. We really enjoyed the interaction with you. Thank you.