Okay, I'm going to talk today about how artificial intelligence, AI, can help us to bridge what is currently a yawning gap between the need for hearing health care and its provision. I'll start by giving a brief overview of some of the problems with the current hearing health care system and then provide just a few examples of places where I think AI can potentially help. And I think this will do a good job of setting the stage for the breakout discussions that we're going to have after this related to some of the practical challenges involved in getting advanced technologies into hearing services and devices. And one of my main messages is going to be that a lot of the challenges are in fact practical. In a lot of cases, we're not limited by the fundamentals of the science anymore. We just haven't done the hard work of getting our advances out of the lab and into the world where they can make a real impact. And my most important point is going to be that we shouldn't view this translational work as somebody else's job. We, the scientists, can and should and maybe even must take a leading role in getting our science out of the lab and into the world. The perspective I'm going to convey today has come out of my work with the Lancet Commission on Hearing Loss, which is still ongoing. Our main report won't be published until next year and that'll cover all aspects of hearing health care. But a group of us have already written a perspective article about the potential for AI in hearing health care. And you can find a preprint of that on my website, lesicolab.com, if you're interested. And that covers everything I'm going to talk about in the talk today and also much, much more. I need to start by pointing out what a huge public health problem hearing loss is. Many of you will already be familiar with these statistics, not least from the recent WHO report. So at the moment, there are 400 to 500 million people living with disabling hearing loss. 
And it's projected that this number will increase over the next few decades to something like 700 million. So this is a huge number. And the vast majority of these people are not receiving any treatment. This is the current headline average for the global service gap, which stands at 83%. So 83% of the people who need treatment for their hearing loss are not receiving it. This refers, I think, specifically to hearing aids. But you can imagine that the same sort of gap exists across all the different dimensions of hearing health care. And this is a global average, so it doesn't reflect the differences between countries, which can be significant. So in a country like the UK, the gap might be a bit smaller, something like 65%. Whereas in low- and middle-income countries, it might be higher, 90 or even 95%. And this failure to meet the need for hearing health care has serious consequences. I'll just show a few. One that gets a lot of attention, quite rightly, is the link between hearing loss and dementia. So hearing loss is the number one modifiable risk factor for dementia. And as the severity of hearing loss increases, so does the risk for dementia. The recent WHO report also included an estimate of the economic costs associated with hearing loss, which they put at around $1 trillion annually. So these are huge numbers. The statistics, of course, only tell part of the story. They fail to convey what we all know are the potentially devastating personal consequences of hearing loss in terms of impact on quality of life. So this is a huge problem. So why is it that so much of hearing loss goes unaddressed? It's a relatively simple question, but the answer is, in fact, rather complex. I think it's helpful to consider two different dimensions of the problem. The first is accessibility. So we can ask more specifically, why are people who would be helped by existing treatments not receiving them? 
The simple answer is that there is a fundamental mismatch between the number of people in need and the number of people who can serve them. This is particularly acute in low- and middle-income countries. So this graph shows the number of audiologists per million people in two different sets of countries, high income and low income. And you can see that in low income countries, in most cases, there's less than one audiologist per million people. So in as much as hearing health care is dependent on audiologists, it's easy to see why in these countries, most of the people who need care are not receiving it. In high income countries, the problem isn't necessarily a lack of trained personnel, but there are other sorts of barriers in place, many of which are artificial and unnecessary, that make it difficult for people to access the treatment they need. These issues have been well documented. In the US, there were reports by the President's Council of Advisors and the National Academies. A lot of people are working to see these barriers removed and there's been some progress. There's still a long way to go. The other dimension of the problem is efficacy. So even if we could magically give everyone in the world access to hearing health care as it exists today, we would still have problems because a lot of the care we have isn't as effective as we need it to be. And this applies to all aspects of care, from diagnosis to treatment. Hearing aids, for example, are helpful in some situations, but not others. This is a figure from a classic study, looking at the benefit of hearing aids for speech in noise. So on the y-axis, you have percent correct in a speech-in-noise task. On the x-axis, you have the speech-to-noise ratio. And this is showing performance for listeners with hearing loss with and without their hearing aid at a relatively low sound level, 52 dB SPL. You can see that the hearing aid provides quite a substantial benefit in this case. 
It's still not restoring performance to normal. A normal hearing listener would be closer to 100% for a task like this, but it's still quite a substantial benefit. If you have the same listeners perform this task at a higher sound level, in this case, a more realistic sound level for a typical social setting, 74 dB SPL, you can see that the benefit from the hearing aid is actually quite limited. And hearing aids are one of our big success stories. If you think about something like tinnitus, there really aren't any treatments that are widely effective at all. So the specific question here is, why aren't the treatments that we have better? And the answer is that hearing is complicated. Even if we forget about the complexities associated with real people in the real world and focus on the auditory system as an isolated entity, we're talking about a highly nonlinear process that transforms sensation into perception. And for most hearing loss, we're talking about problems that arise from interactions between many different underlying pathologies. This just isn't the sort of thing that traditional approaches to medicine or engineering can address in a meaningful way. So we've done the best we can and we've had some successes, but progress has inevitably been slow. So what can we do? Well, I think, and I know most of you will agree with me, that we're in luck because in the last few years, there've been incredible advances in artificial intelligence that can fundamentally change how we approach these problems. From an accessibility perspective, we can use AI to automate or semi-automate a lot of the routine tasks that are currently performed by specialized staff but don't need to be. And from an efficacy perspective, we can use AI to get a handle on some of these complexities that we've been having trouble with thus far. So I just wanna give a few specific examples of ways in which I think AI can transform hearing healthcare. 
Most of these ideas will already be familiar to many of you, but I think it's useful to go through them just to frame the rest of the discussion. Let's start with assessment and in particular measurement of hearing loss. So at the moment, someone who's concerned about their hearing would go into a clinic and get an audiogram measured by an audiologist. The accessibility problems with this are obvious. I've already pointed out the dearth of audiologists, particularly in low- and middle-income countries. There are also problems with efficacy. An audiogram measures the ability to hear low-level tones, which is of course only loosely related to the real-world hearing problems that people actually care about. And it gives very little information that can be used for any sort of differential diagnosis. AI can help with accessibility by automating or semi-automating the measurement process. If we can remove the need for specialized staff, or any staff at all, we can dramatically increase access. And there are already many prototype systems out there that can do this. AI can help with efficacy by facilitating a more comprehensive assessment. In a high-end clinic, there are all sorts of measurements that can be taken: genetic screening, electrophysiology, speech-in-noise tests, and so on. But at the moment, there's no principled way to combine this information to support detailed clinical inferences. So we're not making the best use of this information. We need AI for this. Humans can't reason effectively in high dimensions, but AI can. Okay, what about devices, specifically hearing aids? We again have an accessibility problem because of a dependency on audiologists for fittings and adjustments. And of course the efficacy problems that I already discussed. AI can help by automating some of the device-related services that are routine. 
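To make the automation idea concrete, here's a minimal sketch (not any particular product's algorithm) of the kind of adaptive staircase logic that sits at the core of automated audiometry, with a simulated listener standing in for the patient; all names and parameters are illustrative:

```python
import random

def simulated_listener(true_threshold_db, level_db):
    # Stand-in for a real patient: reports hearing the tone when its
    # level is at or above their threshold, plus a little response noise.
    return level_db + random.gauss(0, 2) >= true_threshold_db

def staircase_threshold(respond, start_db=40, floor=-10, ceiling=120,
                        n_reversals=6):
    # Down-10/up-5 adaptive staircase: drop the level when the tone is
    # heard, raise it when it isn't, and track direction reversals.
    level = start_db
    reversals = []
    last_heard = None
    for _ in range(200):  # safety cap on the number of trials
        heard = respond(level)
        if last_heard is not None and heard != last_heard:
            reversals.append(level)
            if len(reversals) >= n_reversals:
                break
        last_heard = heard
        level = max(floor, min(ceiling, level + (-10 if heard else 5)))
    # Estimate the threshold as the mean level at the reversals
    # (a common psychophysics convention; clinical protocols differ).
    return sum(reversals) / len(reversals) if reversals else None

random.seed(1)
estimate = staircase_threshold(lambda lv: simulated_listener(30, lv))
```

Running something like this once per frequency gives an audiogram with no audiologist in the loop; the hard part is making it robust when the environment and the equipment are uncontrolled.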
And that can potentially help with efficacy as well, because given the right data, AI has the potential to be better than any human at determining the optimal device settings. With efficacy though, the problems obviously go way beyond just suboptimal fitting. If we focus just on speech in noise, AI can help do better de-noising. It's again a question of dimensionality and non-linearity. If you have speech and noise intermixed in a single acoustic signal, how do you separate them? Current de-noising systems look only at low-order statistics of the speech and noise and try to separate them using relatively simple filters. That's just a very limited approach. Deep neural networks, in contrast, can learn to tease apart the speech and noise in arbitrarily complex ways. It's not gonna be perfect, but it can be a lot better than what's in devices at the moment. There are already a lot of impressive prototype systems out there. The real challenge here is getting something to work in the real world, where it isn't necessarily obvious which speaker a listener is interested in at any given time. What if the acoustic scene consists of several people talking? Who should be amplified and who should be attenuated? One solution to this problem is cognitive control, where a device infers from recordings of brain activity which talker a listener is attending to at any given moment. It's a great idea, but there's still a long way to go. Another option is to expand the scope of the device beyond hearing per se to provide a multimodal augmented reality. If the system includes glasses, then you can imagine for speech in noise maybe using eye tracking to help infer which talker is of interest at any given time, or providing real-time speech-to-text in the glasses to fill in when the de-noising isn't cutting it. All of these are great ideas. We also shouldn't be afraid to think really boldly about what AI might help us achieve. 
If we think about tinnitus, we can again potentially move toward more comprehensive assessments, using all the different sorts of data that can be collected about a patient to make more useful predictions about which, if any, of the treatments out there might be most effective for them, instead of just going with trial and error. And if we ever want to get to the root of the problem, we're going to need computational models to help us. A condition like tinnitus is an aberrant network state. It's an emergent property. I don't imagine that we're going to quote-unquote understand it in a way that is going to be clinically useful anytime soon. But if we could build artificial systems that can replicate it, we might be able to use those as platforms for testing different hypotheses and potential treatments. So there's a huge amount of potential here, but also a huge amount of work to be done. Let's just go through these again and think about what's needed in order to get these technologies out there. For automated audiograms, we already have working prototypes; what we need are robust applications. It's one thing to automate an audiogram in a clinic. It's quite another to do it in a community setting where so many factors are uncontrolled, but that's what we need to do if we're ever going to reach the majority of people in need. For comprehensive assessment, we're going to need much better datasets than we have now. Deep neural networks can learn to identify arbitrarily complex patterns, but only if they're provided with sufficient data. This entails all sorts of challenges related to privacy and to making sure that the datasets are representative. A system optimized for one population might not work well at all for others, and hearing is a global problem. We can't just focus on places where data are readily available. We have to develop the infrastructure to collect data from wherever they're needed. 
For devices, a lot of the work that's ongoing right now is about getting systems that look promising in the lab to actually be useful in the real world. This is technical work, but it isn't necessarily about developing new core algorithms; it's more about putting together what we already have into a package that people actually want to use. Not everyone wants to walk around looking like Robocop, so we need to strike a balance between efficacy and usability. For things like tinnitus that are probably longer-term goals, there's still a lot of basic science to be done. One example would be trying to identify reliable biomarkers. This would work well as an iterative process with AI at the center. Maybe we take some existing data and use AI to generate hypotheses by identifying some predictive patterns, and then we collect new data that are optimized for testing those hypotheses. For approaching hearing at the network level, we're gonna need new models of the auditory system that are not only phenomenologically accurate, but also mimic its key mechanistic features. It's hard to know in advance, of course, what those key features are and what the right level of detail is, so there's a lot of work still to be done there. Now, a lot of these to-dos are technical-ish, so I'm sure you can all imagine some of them that you'd be interested in and capable of taking on. But there's all sorts of other fundamental work that also needs to be done to facilitate the adoption of new technologies once they're ready. This includes the creation of new business models. Part of the reason low- and middle-income countries are so underserved is that there are no financial incentives in place at the moment for private companies to serve them, so that needs to change. And that'll probably only happen through some kind of public-private partnerships. 
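As a toy illustration of that first, pattern-finding step (the data and measure names here are entirely made up), a crude screen for candidate biomarkers might look like the following; in practice you'd use a proper model and cross-validation, but the iterative loop is the same:

```python
import numpy as np

def rank_candidate_markers(X, y, names):
    # Rank candidate measures by absolute correlation with the outcome.
    r = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    order = np.argsort(r)[::-1]
    return [(names[j], float(r[j])) for j in order]

# Toy cohort: 200 "patients", three hypothetical candidate measures,
# only one of which actually carries signal about the outcome.
rng = np.random.default_rng(0)
abr_latency = rng.standard_normal(200)    # hypothetical measure names
age = rng.standard_normal(200)
oae_amplitude = rng.standard_normal(200)
outcome = 0.8 * abr_latency + 0.2 * rng.standard_normal(200)

X = np.column_stack([abr_latency, age, oae_amplitude])
ranking = rank_candidate_markers(X, outcome,
                                 ["abr_latency", "age", "oae_amplitude"])
# The informative measure rises to the top of the ranking; the next
# round of data collection would then be designed to test it directly.
```

That's the hypothesis-generation half of the loop; the second half, collecting new data optimized to test the top candidates, is where most of the real work is.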
With all of these new technologies, we're gonna need sensible regulations and useful performance guidelines so that technology developers know that they can easily access their target markets and they know exactly what they should be designing their technology to do. And then of course, there's stigma. The increasing use of technology, hearables and so on, seems to be doing a lot to help with that, but the stigma associated with hearing loss goes beyond just a self-consciousness about wearing a device on your ear. Now, with these last few, I worry that you're thinking that they should be on someone else's to-do list, but not yours. I totally disagree. Who's better placed than a scientist to facilitate public-private partnerships? We understand the need, we understand the technology and who it can help and who it can't, and we have a lot of experience with, and access to, public funding. Our understanding of the need and the technologies also means that we should absolutely be contributing to the development of regulations and performance guidelines. You have to have a deep understanding of these things and what's really important if you wanna avoid just a bunch of pointless red tape and box-ticking metrics that don't actually improve people's lives. And this list is, of course, just the tip of the iceberg. If you think about what aspects of getting technology into the real world interest you, I'm sure you'll have no trouble at all seeing where you can make a contribution. I wanna stick with this theme because I think the most valuable thing I can do today is convince at least a few of you to get serious about this. So this is your typical sort of translational pathway, framed in terms of technology readiness level. It starts at the bottom with the basic science and the ideas, and then moves on to the development of prototypes and testing in the lab. And this is the part where we often stop, write a paper, and then maybe start the whole thing over again. 
And that's fine, that's an important part of our job, to get our work out there so that others can build on it. But somebody needs to take the technology further, out of the lab and into the world, with all the challenges associated with that that I just talked about. So why not take on a leading role in that yourself? This part of the pathway in the middle is called the Valley of Death because so few technologies actually make it through. But in many cases it's not because they aren't worthwhile; it's just because it's hard and no one's taking it on. Maybe the financial incentives aren't there. But if you think your work has the potential for real impact, then why not see it through and find out? So I can imagine some early career researchers might be thinking: yeah, this is all great, but I need to get a job, I need to get tenure, I need to get promoted, and I need papers to do that. I have to say again, I totally disagree. I'm not saying you don't need to publish papers, of course you do. And I can't speak for every department and every research institute. But I can tell you for sure that at least in the UK real-world impact is valued in a meaningful way. And it's one of the few ways that you can actually separate yourself from the pack. Everybody publishes papers. But not that many people can demonstrate that they've made a direct impact. And I think if we reorient ourselves in this direction as a field, it's gonna pay major dividends. I'm not gonna tell you personally what is and isn't going to be ultimately fulfilling for you, but it's clear that there are a lot of talented people out there who do want to have a direct impact. And if we can position hearing as a field where that's the norm, we can attract all of that talent. With respect to innovation around AI, hearing should be right at the center. We've got all the pieces: signal processing, language processing, biotech, health tech, wearables. It's such a natural context for AI. 
But at the moment we're at best on the sidelines and there's a real risk of us actually getting left behind. I think coupling our science with impact can help us avoid that and actually position us as a leading field for this sort of thing. The last point I wanna emphasize is that we need to think globally. Most hearing research might be in Europe and the US, but most hearing loss isn't. This plot illustrates the global burden of hearing loss in a bit more detail than I started with. On the left you have the total number of people with disabling hearing loss, with data up to 2020 and predictions thereafter. 400 to 500 million people currently, going up to 700 million or so in the coming decades. But if you look at where these people are, you can see in the plot on the right that most are in the Western Pacific and Southeast Asian regions, which include of course China and India and the surrounding countries. And also that some of the biggest percentage increases are gonna be in Africa. So someone needs to make it a priority to bring hearing healthcare to these areas. And of course this needs to be approached appropriately, and potentially very differently for different areas. But that's true no matter what community you're trying to work with. As scientists our training, our expertise, is in solving problems. But we need to apply ourselves to the right problems in the right ways. And we can only do that by listening and just trying to be a part of the effort and helping however we can. That might not always be by doing experiments or solving equations. Sharing our expertise is certainly an important part of it. But there's so much more that we can do in terms of mentoring and capacity building in communities that don't necessarily have a lot of resources. So if this sort of thing appeals to you at all, I'd encourage you to just put yourself out there as a willing enabler or a facilitator and just see what happens. 
And in the end, I'm sure you're gonna find that the new understanding and perspectives you get from this engagement are gonna make your science much better anyway. Okay, so I'll leave it at that with a lot of meat on the bone for the discussions. I'll just summarize the case I've been trying to make, which is really pretty straightforward. Hearing loss is a big problem with severe consequences and the ways in which it's currently addressed are woefully inadequate. We need to make dramatic improvements in both accessibility and efficacy. And frankly, AI is probably the only way we're gonna be able to do that. The demographics are just overwhelming. There's no way the current care model can be scaled to meet the need. There's a huge amount of work to be done here and of course I only scratched the surface with my examples. There'll be many more throughout the other talks today. And while some of this work is technical and quote unquote scientific, a lot of it is not, at least not in the traditional sense. But somebody needs to do it and in fact, there are a lot of reasons why it should be scientists. I tried to give an impression of the breadth of opportunities there are for getting involved. You've got entrepreneurship, policy, the clinical side of things. There's a lot of possibilities. And I also tried to emphasize that the problem is really global. And so the solutions need to be as well. So please share your thoughts during the discussions and if you don't get a chance to or there's more you wanna say, please do feel free to email me. I think we all just want hearing healthcare to be as good as it can be. And that means understanding all the different perspectives that are out there in order to find solutions that actually work. So thanks a lot for listening and I'm looking forward to the discussions.