computational audiology network and complex models to clinical care, digital health and patient-centered outcomes. We start with a short introduction and the aim of this meeting. To introduce myself: I work as an audiologist in Nijmegen, and I started my training after a degree in physics. During one of the courses I followed, I met Bert. He gave a course about signal processing in Delft as an invited lecturer, and ever since we have stayed in touch. When I started thinking about computational audiology and using machine learning, Bert already knew the work by Dennis Barbour from writing the computational audiology perspective paper. I met Dennis and we exchanged nice ideas about the potential of active learning. And Joseph, we met each other at the VCCA last year via Tobias Goehring; I had already seen some of your work when we were writing the computational audiology paper. More recently, for a scoping review, we looked into all the digital or automated audiometry approaches that have been published since 2013, and I wondered how many groups would be working on machine learning audiometry. We found the three of you, but we didn't find any other groups. This scoping review has just been accepted and published, and I thought it would be a great opportunity to talk with the three of you about what further developments you expect. One of my reasons for writing the scoping review was also to better understand the barriers: why is it not used in the clinics yet, and how does it compare to other automated audiometry approaches? So that's briefly my motivation for contacting you, and I'm really glad that you all replied positively. For this interview, I think that with the questions and answers you already provided we have enough to fill a blog post and to share thoughts on the potential and further development.
So having said that, maybe it would be good, Joseph, if you introduce yourself first, and then Dennis and Bert can explain more about their motivation as well.

Yes, I'm a lecturer now in Manchester. Before that I was a postdoc in Cambridge, and before that I did a PhD in psychology in Germany and studied electrical engineering. So that's my background, and machine learning audiology started for me in Cambridge, on the grounds of Bayesian active learning applied to the work of Brian Moore and Richard Turner. That's basically my brief background. I think I've met Dennis and Jan Willem at the VCCA.

And Joseph, is there anything you would like to get out of this meeting, or what is your motivation to join?

Yeah, I think it's a fantastic idea that you're doing this, Jan Willem, because we have our scientific publications, but when you write the blog you might bring the work closer to clinicians and maybe even companies, so that they finally start implementing it.

Yes, I think that is really something we can do with this blog. Some of your papers, also the conjoint analysis paper by Dennis for instance, are quite dense to read, and I think many busy clinicians won't get through them and think, ah, this is something we need to bring to our clinic. Great. Then would you like to tell us something about your background?

Sure. I'm currently a professor of biomedical engineering at Washington University in St. Louis. My educational background is electrical engineering, biomedical engineering, neuroscience and medicine; I also have an MD. The lab that I set up at WashU was a primate neurophysiology lab: we were recording neurons in the auditory cortex, trying to understand complex vocalization processing in a vocal primate species. Then we started doing a little bit more with humans, trying to replicate some of our findings, with some electrode recordings but mostly behavioral data.
We became interested in perceptual training, trying to induce therapeutic changes in brain function to improve signal detection, specifically speech processing. As we started thinking about that literature, if you know it, it is very confusing: the ability to achieve perceptual training effects that persist and transfer in any complex domain is very spotty. It depends on the lab and the preparation, and it's not a highly reproducible set of data in the field. So we started reasoning about how to optimize the training trials for each person. That turns out to be a very hard problem, but we stumbled across this idea of active learning, not for therapeutic purposes, for training, but for testing. The opportunity to speed up testing procedures really seemed to present itself with this set of tools, and that has taken over what we do. So we're not working on training at the moment; we're not even working on monkeys anymore. We're really thinking about, and this is why I also appreciate the idea of a blog entry that might bring things closer to clinicians. The way I'm contemplating this is: I still want to get back to training, but for very complex latent constructs, especially those that bridge perception and cognition, like speech processing. There are many points of failure in noisy speech processing that can happen anywhere from the ear to the brain, and those fall in between perceptual and cognitive phenomena. I like to build models and testing regimes that can bridge those gaps, and the amount of data required using classical methods is just prohibitive for unifying that kind of thing.
So my goal nowadays is to extend beyond the audiogram. It was just our test case; it turned out to be quite successful, but it was really us proving to ourselves that this approach would work and add value, and it also gave us some background for more complex model construction. My interest in this meeting would be exactly that: to take those ideas out and promote them as a possible future for behavioral testing in a wide variety of fields.

That's cool. And what you mentioned about inducing a change in the brain, I think that translates to clinicians: it is what we do with cochlear implants when we start fitting them. The big problem is that what we do during the first month really changes the brain, so you cannot just say, okay, we go back and do it again. There's something like a hysteresis loop: you have made changes, and people have become accustomed to something. That's a problem I think we are not yet able to tackle in the clinic. Paul Govaerts from Belgium did a review about fitting procedures, and it turned out that every clinic had its own procedure, and many of these procedures turned out to be fine, apparently because the brain adapts. In terms of trying to capture clinicians' experience, I think this is an example. The other thing is that you started with the audiogram, and in our scoping review we found that the mean testing times are around five minutes per test. So if you're talking about time efficiency, there's not much room for improvement, and that could also be a reason that clinicians are not yet tempted to adopt this. But if you can show that you can combine it with other tests to speed things up, and with more complex theory, that might be a stronger case.
That might be a more succinct way of summarizing my interest: joining the audiogram to other relevant tests into a unified, or mostly unified, testing procedure that integrates the data across all the tests to decide what the optimal sequence of queries is going to be.

And that's also a bridge to the clinical work of fitting hearing aids and the complexities there, as experienced by clinicians. Yeah, so shall I introduce myself? Yes, please. Okay. I work as a professor in electrical engineering, specifically signal processing, at the technical university in Eindhoven. I have a small affiliation with GN; at the time that we wrote the work on active learning for the audiogram I worked in a much larger capacity for GN than I do currently. My interest is basically in automating the design of algorithms. The brain is not born with the capacity for speech understanding; we learn to understand speech through spontaneous interactions with our environment, just as we learn to walk and to recognize objects. There's a beautiful theory by Karl Friston on how the brain computes, called the free energy principle; it's a very Bayesian, probabilistic theory. My goal at my lab here at the university is to translate those ideas to engineering, to build agents that learn purposeful behavior naturally through spontaneous interactions with the environment. That could include speech recognition or object recognition, and we also build robots. As for the active learning paper: since the approach in our lab is very Bayesian, it just seemed natural. This was around 2014; I had a PhD student, Marco Cox, and in a discussion we figured out that this whole audiogram-taking looks like a classification problem. You have a discrimination boundary, and everything on one side is one class, you can hear it, and on the other side you can't, so let's just build a Bayesian classifier for this.
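The classification framing Bert describes, a discrimination boundary in the frequency-level plane with "heard" on one side and "not heard" on the other, can be sketched in a few lines. The linear threshold model, grids, and psychometric slope below are illustrative assumptions for a toy simulation, not the model from any of the published papers, which use Gaussian process priors:

```python
import numpy as np

# The audiogram as a Bayesian classifier, in its simplest parametric form:
# model the threshold curve as a line over log-frequency, t(f) = a + b * f,
# and maintain a grid posterior over (a, b). All numbers are illustrative.
rng = np.random.default_rng(0)
a_grid = np.linspace(0, 60, 61)        # intercept candidates (dB HL)
b_grid = np.linspace(-5, 15, 41)       # slope candidates (dB per octave)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
log_post = np.zeros_like(A)            # flat prior over (a, b)

def p_heard(level, freq_oct, a, b, slope=0.8):
    """P('heard') given a candidate threshold line t = a + b * freq."""
    return 1 / (1 + np.exp(-slope * (level - (a + b * freq_oct))))

# Simulate 60 random trials from a listener with true a=25, b=5.
for _ in range(60):
    f = rng.uniform(0, 6)              # frequency in octaves re 125 Hz
    x = rng.uniform(-10, 90)           # presentation level (dB HL)
    heard = rng.random() < p_heard(x, f, 25.0, 5.0)
    p = np.clip(p_heard(x, f, A, B), 1e-9, 1 - 1e-9)
    log_post += np.log(p if heard else 1 - p)

post = np.exp(log_post - log_post.max())
post /= post.sum()
a_hat = float((post * A).sum())        # posterior-mean intercept
b_hat = float((post * B).sum())        # posterior-mean slope
print(a_hat, b_hat)
```

The published approaches replace the two-parameter line with a Gaussian process prior over threshold curves, but the inference pattern is the same: a prior over boundaries, a Bernoulli likelihood per trial, and a posterior update after each response.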
And he did, and it turned out that Dennis had the same idea around the same time, maybe even a few months earlier. So Marco wrote that paper and put it on arXiv, and we found out that Dennis had written basically the same paper. That's the story, and then it took a long time. Recently Marco made some improvements to his design, with a new prior and a mixture of Gaussian processes; that's our 2021 paper.

Yeah, I didn't read the update yet. No, I was also surprised. But you're bringing up the free energy principle by Friston. From what I have heard and tried to understand about it so far, it's quite complex, also when using it for actual predictions; that's some of the critique, I think, of this model.

So what Karl Friston says is that the brain just follows the laws of physics. And for the laws of physics there is a sort of umbrella framework, called the principle of least action: you can write down a function, or a functional, and from that functional you can derive classical mechanics, electrodynamics, basically all the branches of physics. You can also write an information-theoretic formulation of this, and then it turns into what we in machine learning call variational Bayes. That's all that's going on in the brain: it just follows the laws of physics. It turns out that if you write that down in an information-theoretic way, you're also doing Bayesian reasoning. So following the principle of least action in the brain leads to Bayesian reasoning, which is machine learning, and you can then use it for information processing, for designing algorithms, for all kinds of things, for machine-learning how to hear.
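For readers who want the one formula behind this: the variational free energy Bert refers to is, in its standard textbook form (notation mine, not from any of the papers discussed),

```latex
F[q] \;=\; \mathbb{E}_{q(z)}\big[\ln q(z) - \ln p(x, z)\big]
     \;=\; D_{\mathrm{KL}}\big(q(z)\,\big\|\,p(z \mid x)\big) \;-\; \ln p(x).
```

Minimizing \(F\) over the approximate posterior \(q\) simultaneously scores the model, through the evidence term \(\ln p(x)\), and solves the inference problem, through the KL term. That is the "one cost function for all problems" he mentions.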
What we do in my lab is not to verify that claim, but to use it as an inspiration for engineering. We have an engineering lab where the design of hearing aid algorithms and the fitting of hearing aid algorithms is an interesting application area, but the principle is broad enough to include lots of other applications; think of self-driving cars or robots that learn to walk. The key observation is that since this is an umbrella kind of theory, it's basically one solution method for all problems. In engineering, you start with a problem, then you go to the literature, you find 15 solutions, you modify one, and now there are 16 solutions. The brain turns that around: it cannot afford to come up with a new solution for every problem. There's just one solution method, free energy minimization, or following the principle of least action. It learns both the problem and its solution simultaneously, and it also scores how well the problem is represented and how well the solution solves the problem. So there is one cost function for all problems, and that makes it really nice for engineering too, because I can apply it anywhere, including to audiology if you want, but also to other areas.

Another way to say this is that you then apply probability theory. Yeah, or is that an oversimplification? Well, it is a simplification, but it's also very accurate. And I think, in essence, the three approaches by Dennis, Joseph and Marco (because it's mostly Marco Cox's work) are very similar. It's a Bayesian classifier for an interesting problem, finding an audiogram, but that framework of active learning is really broader; it can be applied to much broader sets of problems than just taking an audiogram.

I think this was one of the questions: Joseph, I think you had the question about similar applications outside audiology, and you also brought it up.
Are we constrained only to the audiogram in this discussion? I guess this question is already coming up naturally here. The three of you all have your own thoughts or directions on where you are heading. Could you briefly mention the future directions you're aiming for, or applications you think are interesting? Joseph, would you start?

Well, yeah, the audiogram is sort of the ideal test bed, because it's simple and it has those two dimensions. One of them is distance-based, so it could even be more than one dimension, just something that gives you a distance; that's frequency. And the other dimension, level, is the monotonic dimension in the classification problem. By having these two dimensions, it's the ideal test bed. And as you said, we can put lots of effort into it and decrease testing time from five to three minutes, or even two minutes, which is exciting from a scientific point of view; from a practical point of view, it's not too much. So it's an ideal test bed, but we really want to use it for further tests. Within audiology there are lots of further tests that can be done, like Dennis said, speech tests. With speech you have the problem that you have not two variables but many, and you have to identify which ones you want to learn. We have done a similar thing with the notched-noise test. In a notched-noise test you have about eight variables: signal frequency, masker frequencies and levels, notch width, and so on. Eight variables are too much, the curse of dimensionality, so you have to tailor the problem to reduce your variables. What we did was reduce it to three variables, but one of them was signal frequency, which wasn't done in notched-noise tests before; you typically test one auditory filter at one frequency.
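The curse of dimensionality Joseph mentions is easy to make concrete. With, say, 20 grid points per stimulus variable (an illustrative number, not from any of the papers), the size of the stimulus space explodes:

```python
points_per_axis = 20                  # illustrative grid resolution
full_space = points_per_axis ** 8     # all eight notched-noise variables
reduced_space = points_per_axis ** 3  # after reducing to three variables
print(full_space, reduced_space)      # 25600000000 vs 8000
```

Even a sample-efficient Bayesian method cannot explore 2.56e10 stimulus combinations in a clinical session, which is why tailoring the problem down to three variables matters as much as the inference machinery itself.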
And the huge advantage of Bayesian active learning is that, like in our audiogram approaches, you have a continuous frequency, and suddenly you get the auditory filters across the whole hearing range. That is a huge advantage, because testing several auditory filters at four or five frequencies takes two hours or longer, while with active learning you can do it in half an hour. And that's a real difference for the clinic: you can put the patient in the booth for half an hour, but not for two, three or four hours. And the audiologist doesn't need to be present during those tests; they are automatic, and they can correct for errors, because everything is probabilistic, and they can figure out that one answer is so unlikely that it was just a wrong button press. So that's the beauty of these tests. We have worked on a few further tests, like auditory filters and dead regions, dead regions even without Gaussian processes, but also Bayesian equal-loudness contours. There are probably many more to do, and I think our three groups will work in slightly different directions and provide many more tests, so that's great for audiology. But we should keep in mind that there is a much broader field of application in the whole of healthcare, wherever you ask patients questions, wherever you can do more than one test, and you want these Bayesian approaches. For example, when you measure blood values, you need to take a needle, which makes it a less good test bed. And you can choose which values you want to analyze, and that costs money. So if you have a Bayesian approach that tells you which values are interesting to analyze, that supports the doctor's decisions. That sort of thing is where we as audiology could be the pioneers, because we have such an easy test bed, and other fields of medicine could identify where they can use our approaches and integrate them into their practice.
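The "everything is probabilistic" advantage Joseph describes, picking maximally informative stimuli and absorbing an occasional wrong button press, can be sketched for a single hearing threshold. All numbers here (grids, psychometric slope, lapse rate) are illustrative assumptions, not any group's published procedure:

```python
import numpy as np

# Minimal Bayesian active learning for one hearing threshold (dB HL).
thresholds = np.linspace(-10, 90, 201)                     # candidate thresholds
posterior = np.full_like(thresholds, 1 / len(thresholds))  # flat prior
levels = np.linspace(-10, 90, 101)                         # candidate stimuli

def p_heard(level, threshold, slope=0.5, lapse=0.02):
    """Psychometric function with a lapse rate, so a stray wrong button
    press can never drive a response probability to exactly 0 or 1."""
    core = 1 / (1 + np.exp(-slope * (level - threshold)))
    return lapse + (1 - 2 * lapse) * core

def entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def next_level(posterior):
    # Expected information gain = H[predictive] - E_posterior[H[likelihood]]
    gains = []
    for x in levels:
        p = p_heard(x, thresholds)        # P(heard | x, t) for every t
        pred = np.sum(posterior * p)      # marginal predictive P(heard | x)
        gains.append(entropy(pred) - np.sum(posterior * entropy(p)))
    return levels[int(np.argmax(gains))]

def update(posterior, level, heard):
    like = p_heard(level, thresholds)
    post = posterior * (like if heard else 1 - like)
    return post / post.sum()

# Simulate 20 adaptive trials on a listener with a true threshold of 35 dB HL.
rng = np.random.default_rng(1)
for _ in range(20):
    x = next_level(posterior)
    heard = bool(rng.random() < p_heard(x, 35.0))
    posterior = update(posterior, x, heard)

estimate = float(thresholds[int(np.argmax(posterior))])
print(estimate)
```

The lapse term keeps every response probability away from 0 and 1, so a single wildly unlikely answer shifts the posterior only a little instead of wrecking it; that is the wrong-button-press robustness mentioned above.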
That's a really cool broader perspective, and I think we have to return to it in this discussion, because what you mentioned about pioneering would really be a paradigm change in medicine. I would like to discuss this further, but Bert and Dennis, you also have ideas for applications. Dennis, what further applications are you considering?

I'll just say I agree with that assessment 100%. I keep saying this in talks, all of those points that Joseph just made, and I think they go over most people's heads. I really believe audiology can be an example for the rest of medicine, because our problems are tractable. I won't call them easy, but they're tractable for these approaches. Many kinds of approaches could be postulated in other fields; it's just harder there to state the problem in a way that's actually solvable in the same manner. So we're starting in that direction. Like I mentioned earlier, my interest is in bridging perceptual and cognitive constructs. We're now building cognitive models in the same way, and it is considerably harder than psychophysics, because the feature space comes for free in psychophysics, but in cognitive spaces there isn't even general agreement about what the feature space is; it can only be operationally defined, and everyone has their own operational definition. To give you an example, we're generalizing the principles, the model construction, the active learning, and then the population-level analysis I'll talk about, to latent variables. Active latent variable modeling is the way to say it, I would say, and how we form these latent variable models.
It's not going to be as simple as pulling kernels off the shelf like we've done so far for this probabilistic classifier, so we're trying to figure out ways we might do that, both empirically from data and from theoretical constraints that we can impose from other knowledge. We're making progress, but it is so much easier to operate in perceptual space, so we're still keeping projects alive there, because we can make progress. And the machine learning audiogram that I have, you know, beaten into the ground: that's partly because it is such a great model system. I think of the audiogram literally as a model system. It's a use case that has a gold standard, it's the simplest complex psychometric function you can postulate, and it just makes a great test bed. If we couldn't speed things up by a factor of two to four for the audiogram, I wouldn't spend all this effort trying to build these more complex latent variable models. So the next step for me, where I think we're going to get the biggest payoff for these methods, is in procedures or diagnostic trajectories for disorders that are highly variable within the population. When there's great population heterogeneity, you can't just average across big cohorts, you can't just take a little data from a lot of people and then plug new patients into this population somehow and understand the best way forward. In audiology we also realize that, because every fitting procedure, at least at the cochlear implant level, and I would argue even at the hearing aid level, is individualized. In a sense, you can't just blindly pull rehab plans off the shelf and apply them to everyone. So we already have the mentality that things need to be individualized, and that doesn't really exist throughout the bulk of medicine, even though the concept of precision medicine is all about adapting your therapies to the individual.
It's just not thought about in the same way as we think about it in rehabilitation. So my goal is to expand these tools into a space where we can conceptualize more complex latent constructs in brain and behavior. I'm not going to leave brain and behavior, because I think there's plenty of work to be done there, but I want to use all of these as templates for the rest of medicine, to say: these active learning procedures are really useful when you have highly variable manifestations of disease, and we can reduce the amount of data we need to collect from each person to make a diagnosis and then ultimately decide on the optimal treatment. These methods might ultimately lead to a rational selection of completely individualized treatments for each individual. The way clinical trials work is that you give a cohort an intervention, and if it works, the range over which that intervention is deemed relevant is whoever was in that cohort. We're trying to break out of these population- or cohort-level rules of inference and bring in the formal ability to infer across variation in populations and still pick the best choice for each person. And I think these Bayesian methods are ideal for that.

So I think Joseph just explained a paradigm change that we need, but you also run into the same problem, I would say: for instance, with these cohort studies, to get FDA approval or a CE marking you need to test something on a group. As a clinician, I should say that often we just have a single solution for hearing aids. There's a prescription rule, and it's more or less one-size-fits-all; you give it to almost everybody. Sometimes there's some tweaking, but then you run into a really gray zone where you don't really know what you're doing, and it's based on previous experience.
So I think if you ask clinicians for a fair assessment of how certain they are about what they're doing, it's probably either trial and error or something they've done before that worked on a particular patient, so they just try it again. And that is also sometimes, I think, where clinicians feel they excel: oh, I know from experience that this person is really different from the cohort, and together with the patient they will look for a decent solution. Or you give up after a couple of trials and say, okay, now we start to counsel on how to cope with this limitation that we cannot solve. I think what you propose would help the clinician in this search, but it's also a leap of faith, in the sense of: would I as a clinician still be able to understand the procedures I'm following, or is it a kind of black box that gives great advice? And would a clinician still be needed, or is it better that the algorithm directly interacts with the patient, for instance?

My brief answer is that I absolutely believe clinicians are always needed. Clinician and patient, and maybe supporting family, need to be involved in the decisions. The Bayesian methods can provide guidance and suggestions, but they don't provide values. You can define a cost function, but that's outside the scope of the algorithm; it has to be defined by the people involved. So these are clinical decision support tools, and that's the right terminology within AI in medicine, and the right terminology here. They're there for very complex scenarios, where the human brain doesn't juggle conditional probabilities very well; you rely on the algorithm to compute those things and then evaluate at the outcomes level, essentially.
So in working out this interview, it's important that we stress that point: why clinicians are needed, but also what change in mindset or approach is needed from clinicians, and how to get them curious to try this. What I can maybe add here is that on the website we see a lot of visitors from all over the globe, also from many countries in Africa and Asia, and I think there is curiosity. But there is also fear. For instance, an audiology trainer told me, a little bit depressed, that when she shared something on LinkedIn, many of the students were uncertain what kind of job they would get. She actually made a call: could you please share positive experiences of how your job can develop, and the opportunities? So it's more or less on my to-do list to say there are a lot of things you can explore to improve your clinical care. We also just wrote this Wikipedia article about computational audiology, and what I like about it is that I learned that my one-sentence summary would be: translating models into clinical care. And I think about two years ago already in this discussion it was said that it's a model-based approach, but this is translating it into clinical care. It's of course really important that clinicians are involved and see the potential.

Jan Willem, I have a suggestion actually, because I have had, I believe, five audiology students come through my lab and publish papers with me, including that very first paper. I have found that the younger students are very interested in this technology, and the older practitioners are the most skeptical. So it might be interesting to interview students who have worked with these automated and/or machine learning based methods and get some of those interviews up on the website. I think that could be interesting. Yeah, it's a good idea.
Then let's continue with this question to you, Bert, especially since I think we're making this application space bigger and bigger, and you already mentioned what your lab is doing. What is this space, more or less? How would you define the applications, and how do you see the scope? Is audiology maybe some kind of example case, with the real benefit coming when you do this in medicine at large, or society at large?

Of course, I completely side with what Joseph and Dennis have said. I'm not an audiologist. We are all three, by training, originally electrical engineers, so you need to keep that in mind: we have a very computational view of this whole field, and that may not be the best view. I mean, Dennis is also a physician, so he's maybe different. In my view, and I have a few comments about what I just heard: it's important to remember that the audiogram just tries to estimate the hearing threshold, but the hearing threshold is not something physical in your brain. It's a variable in a model, just a variable in a model that we write down, and with that model we can predict whether, if we give a stimulus to a user, they will answer or not. All the Bayesian method does is provide a framework for estimating that variable. But the Bayesian framework is broader: it can estimate any kind of variable. So my interest, the next step, would be: let's estimate more parameters of the hearing aid algorithm. And in my lab we are really interested in using the Bayesian approach not just to estimate the parameters, the fitting, but to derive the whole algorithm. Let's just derive the whole hearing aid algorithm. It's a long-term ambition that will take years, and we'll see how far we get, but in principle there's nothing that would stop us from doing that. Having said that, in everything we do here we never see a patient.
It's sort of an isolated exercise here, which we hope clinicians can at some point use and take to their patients. None of my students will ever see a hearing aid patient; we have no idea how to deal with a hearing aid patient. We do technical work, and it's very interesting. And I think there is a chance that through the work we do, where you just move about and interact with an agent, saying oh, I like what I'm hearing, I don't like what I'm hearing, over time we can design a hearing aid algorithm. How that is used by a clinician who is talking to their patient is a completely different profession. So my advice to audiologists and clinicians would be: try to stay interested in what happens on the technical side. You don't have to know the details of Bayesian inference, but Bayesian inference is an important, growing field, and it's really interesting to learn something about it; there's a reason why all three of us are so enthusiastic about it. It's very important for audiology too, so keep learning and reinvent yourself. And I would say the same thing to signal processing engineers in the hearing industry, because if this works, if we build agents that design algorithms, then what do we do with the signal processing engineers? So it's not just a problem for clinicians; it's a problem for all of us, myself included.

So we all have this problem, I think, of a bit of anxiety about the future, and probably the soup will not be eaten as hot as it is served. That brings me to one of my favorite quotes, "never not be afraid", from a movie that's very fun to watch, about a family living in the stone age who are afraid of everything. But what you're saying now, I think, is that people reading about your work might think: okay, I need to find a faster way to get to the thresholds.
But the reason we were measuring thresholds in the first place is that it's too complex to measure people's responses to all sounds, because what you would really want is to optimize how people hear any sound. In practice, we measured thresholds and then used the half-gain rule, more or less, to decide how much amplification we could provide for patients. And I think we have even forgotten about the ideal of making any sound audible. One thing you are now touching upon is how to get to ecologically valid assessments, because what we do is measure in a sound booth, with really artificial sounds, only pure tones or maybe warbles, while you would like to know how people are actually hearing in their daily lives and how to optimize for that situation. So that's of course also really close to hearing aid fitting, I think, in optimizing for real situations.

Yeah, I think eventually most hearing aid fitting should take place in the field, where the problems happen. You don't want to fit anything until you have a problem; then you solve the problem and you move on, and you solve the next problem and you move on. This is sort of how we bumble through life, and I think it's also how we should teach our hearing aids to behave. And in principle you don't need pure tone audiometry to estimate the hearing threshold. If the threshold is a variable in your model, then it's just a latent variable, and with Bayesian methods, if you get responses from patients to other stimuli, you may also be able to estimate that hearing threshold. And maybe you will find out that in order to listen well to sounds we don't even need a very accurate hearing threshold; other parameters may be more important. So the hard focus on getting the most accurate hearing threshold is something that I think will become less important over time. Yeah.
I'm not sure if the others agree with that, but that's how I feel about it. Can I say quickly: I agree. I have focused a lot on these hearing thresholds, the audiogram, but I view that as a proving ground, being able to incorporate more interesting latent variables, and I would love to eliminate the sound booth entirely because of the ecological validity question; it just has issues. I agree. I agree.

I think maybe one of you also wrote this in the answers already: we clinicians have the term hidden hearing loss, which shows that we cannot measure this. But when we use active learning, I guess we will be better able to pinpoint these cases of hidden hearing loss, where we don't have the sensitivity to detect hearing difficulties and there is a difference between the problems the patient reports and what we measure in the sound booth. In terms of time, one of the ideas was that you could also ask each other questions, so let's make a round of questions to one another. Dennis, do you have a question for Joseph, for instance?

My original questions were origin story questions, like how everyone in this group got into this mode of thinking, but what I've already taken away is that we come from slightly different perspectives, though maybe not so different, since I didn't realize we were all trained in electrical engineering and signal processing. I considered myself a signal processing engineer; I guess that's why I got into auditory work at the very beginning, and then ultimately into the neurobiology of hearing later. But I feel like I know that better now. So there are some uniquenesses, but I think we're integrating ideas from other fields and bringing them to this space.
I guess I just have a comment: that was interesting to hear, and I would have asked that question if I didn't feel I already had a pretty good sense of it. But maybe another question then. Joseph, do you have a question you want to ask Bert or Dennis?

They showed their future plans and use cases in great detail, and I just agree with all the motivations. As a comment, our approaches are a bit different, especially in the future plans. Bert's approach is big data, and with that, free energy. That's a very interesting area, because he can handle many more responses, much more data, with those approaches, and I think variational Bayes has not been used much so far in our applied Bayesian active learning field. So I think that's very interesting work he's doing.

Yeah, and I'd say it was a surprise that there's so much alignment between the three of you. For the approaches that's of course not a surprise, but maybe I should also have invited another person, or clinicians, to get a better contrast between what's used in practice and the new tools being developed. So your idea of maybe also doing some interviews with clinicians, I'll think about it.

What I also wanted to share is that in November there was a reading group about computational audiology, initiated by a lecturer in audiology from Texas. She was interested in machine learning in her field, so she contacted me in, I think, October, and we put it on the website. Then eight people responded and followed the same course on Coursera about artificial intelligence. The eight of them had discussions every Sunday morning in that month, and they were really nice discussions; they were done in Slack.
People were just typing in their responses and thoughts about the lessons, and she had always prepared two or three questions for the week, based on the part of the course the whole group was watching in more or less the same time frame. So I could start by asking them their thoughts on how they intend to use these ideas in either clinical training or clinical work. It also shows that it's maybe still a tiny group of people who are really curious about these new methods. Are there other things you want to share, or otherwise we can sign off nicely on time?

Actually, I do have a question, maybe for everyone. It's interesting that the three of us have converged in this particular space, hearing science. Have any of you seen similar, parallel work going on in other fields? I can say that we just got a grant to do the same thing for a vision test. There are Bayesian methods that have been used in other psychophysical domains, but this exact kind of approach that we're taking I just haven't seen elsewhere. I'm wondering if you have seen that kind of thing.

Not for a very long time. There's that paper from 1999 by Tyler in vision, which basically said we should do it the way we are doing it now, but with the computation of the 1990s, so much simpler. After that I haven't seen much in that field. Out of personal interest I tried to reach out to rare diseases and autoimmune diseases, but so far without success. So I attend those sandpits at my university and at other UK universities.

I think in most fields there is a sub-community of Bayesian scientists or engineers who try to approach the classical problems in their field from a Bayesian viewpoint. I got interested in the Bayesian approach around the early 2000s, 2001 or 2002.
I'm not so sure exactly. It came from reading articles by theoretical physicists applying it in cosmology, because in astrophysics experiments are extremely expensive, so they have to do active learning: they have to make sure their experiments are informative. And then I thought, well, we work with people, so our experiments also have to be informative. That made sense, and since then I have worked on Bayesian methods. Maybe it took two years for me to realize: wow, this approach basically covers the whole scientific endeavor. The Bayesian approach is basically a description of science. It should be part of every field, and in every field there's a subgroup of people working on it, also now in ours, for hearing aids. In some communities it's a slightly bigger group and in others it's very small, but I would encourage people to study it. And the papers that we wrote, whether you read Joseph's paper or Marco's paper or Dennis's paper, are a good test case. It's a very clean problem: if you have studied a little bit of Bayesian material, you can test yourself and see whether you can actually read the paper Joseph wrote or the paper Dennis wrote. And if you can understand that paper, you will start to see: oh, now I can apply this everywhere. These Gaussian processes, I think they may even come from the group that Joseph used to work with, Richard Turner's group; they apply them everywhere. I think they even put them on the web, and you can pay money and they will optimize your optimization problem, your software optimization problem. So they are applied almost everywhere now. But it's still a very small subgroup of people in every field.
A good example is from Neil in Cambridge, who had that paper in 2011 about Gaussian processes and active learning, which interestingly wasn't formally published but has several hundred citations. I think the machine learning world was skeptical at the time. But if you read it with an audiological view, the whole paper seems to say: okay, you can use this for the audiogram. Then it took a few more years until Dennis was really the first to publish it: yes, this is an audiogram, this has an application, this is not just machine learning toy stuff in some mathematical world. So huge credit to Dennis, and to Bert at the same time.

Just going back to Bert's statement, my favorite quote in this space is: no Bayesians are born, they're all converted. Most of the real Bayesians I have worked with have some kind of epiphany story: they were scientists, they stumbled across this literature and realized, oh, this describes exactly how I think about these things, and no one ever taught it to me. You just reminded me of that. And I do agree there are Bayesians operating in these different spaces, but it's the combination of Bayesianism, the exact models we're using, the treatment of these psychophysical tasks as classification problems to solve, and these types of models, all conglomerated together, from which the full power of these techniques emerges. And still, I think we're leaders. My conclusion at this stage is that our field is leading the charge, because I'm not seeing it, at least to this degree, in other spaces.

Wow, I really like this. So what I want to ask: maybe it's good if the three of you share your favorite quotes, and we will put them in the blog. I put it in the chat, these resources, because since we have been discussing using this in medicine at large, it's important that people start to gather intuition and use cases and can play around with it.
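As a rough illustration of the Gaussian-process idea discussed here (a minimal sketch with made-up numbers, not the method of any of the papers mentioned): a GP prior over the audiogram treats threshold as a smooth function of log-frequency, so a few measurements already give predictions with uncertainty at untested frequencies, and active learning then tests the frequency where that uncertainty is largest.

```python
import numpy as np

def rbf(a, b, length=1.0, var=100.0):
    """Squared-exponential kernel on log2-frequency (parameters are illustrative)."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Standard audiometric frequencies; the 'true' audiogram is invented for simulation.
freqs = np.log2(np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0]))
true_audiogram = np.array([20.0, 25.0, 30.0, 45.0, 60.0, 70.0])  # dB HL

measured = [0]   # start by testing 250 Hz
noise = 4.0      # assumed measurement-noise variance (dB^2)

for _ in range(3):
    X, y = freqs[measured], true_audiogram[measured]

    # Exact GP regression posterior at all candidate frequencies.
    K = rbf(X, X) + noise * np.eye(len(measured))
    Ks = rbf(freqs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = np.diag(rbf(freqs, freqs) - Ks @ np.linalg.solve(K, Ks.T)).copy()

    # Active learning: next trial at the most uncertain untested frequency.
    var[measured] = -np.inf
    measured.append(int(np.argmax(var)))
```

Each loop iteration refits the GP and picks one new frequency, so after three iterations four of the six frequencies have been tested, with the posterior mean interpolating the rest.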
So on this website, with the resources, so far we have collected tools and models that can be used for remote audiology or for research purposes. If you have something that you are free to share, please consider it; there are a lot of different ways we could share it on this website.

Thanks a lot, Jan Willem. It was a really great initiative. I really enjoyed it. Thank you also, Dennis and Joseph. It's really good to meet you. Agreed, this was fun. I agree. Thank you. It was really great.

Thank you for listening to the first episode of the Computational Audiology Network podcast, with guests Dennis Barbour, Josef Schlittenlacher and Bert De Vries. Podcast sound design by Steve Tade and Jan-Willem Wasmann, with contributions and help from Mark van Wondruitsch, Bas van Dijk, Rico Miglourini, the Wetswannepel, Alan Archer-Boyd, Dennis Barbour and L. O'Brien. Podcast production and host: Jan-Willem Wasmann.