I've spoken with Dennis basically only once, in a phone call about six years ago. Okay, cool. Well then, let's start with a short introduction and the idea behind this meeting. If I introduce myself: I work as an audiologist in Nijmegen. I started my training after a physics degree in Utrecht, and during one of the courses I followed I met Bert. He gave a course about signal processing in Delft as an invited lecturer, and ever since we have stayed in touch; sometimes I ask him for advice when I have ideas for projects, and when I started thinking about computational audiology and using machine learning, Bert already knew the work by Dennis Barbour. So that was more or less next in line: while writing a perspective paper I met Dennis and we exchanged nice ideas about the potential of active learning. And Joseph, we met each other at the VCCA last year via Tobias Goehring. I had already seen some of your work when we were writing this computational audiology paper, and more recently, for a scoping review with De Wet Swanepoel, Rob Eikelboom and a colleague in Nijmegen, we looked into all the digital approaches to automated audiometry that have been published since 2013. I wondered how many groups would be working on machine learning audiometry, and we found three: that is, the three of you. The paper by the group of Bert de Vries was then still a preprint, and I think we excluded it because it lacked experimental validation, but we didn't find any other groups. Now this scoping review has just been accepted and published, and I thought it would be a great opportunity to talk with the three of you about what developments you further expect. One of my reasons for writing this scoping review was also to better understand the barriers: why is it not used in the clinic yet, and how does it compare to other automated audiometry approaches? 
So that's briefly my motivation to contact you, and I'm really glad that you all replied positively. For this interview, I think with the questions and answers you already provided we have enough to fill a blog post and to share thoughts on the potential and further development, so I would say everybody is free to share ideas. We don't need to worry too much about being able to publish something on the website; this is also more of a fun experiment, and whether you enjoy this meeting is more important for now. Having said that, maybe it's good, Joseph, if you introduce yourself further, and then Dennis and Bert can explain more about their motivations as well. Yeah, so I'm a lecturer now in Manchester. Before that I was a postdoc in Cambridge, and before that I did a PhD in psychology in Germany and studied electrical engineering, so that's my background. The machine learning audiology started in Cambridge, on a grant for Bayesian active learning, working with Brian Moore and Richard Turner. That's basically my brief background. I think I've met Dennis and Jan-Willem at the VCCA; Bert, I think it's the first time that we meet. Yeah, I think so. Well, Joseph, is there anything you would like to get out of this meeting, or what was your motivation to join? I think it's a fantastic idea that you do this, Jan-Willem, because we have our scientific publications, but when you write the blog you might bring it closer to clinicians and maybe even companies, so that they finally start implementing it. Yes, I think that's really something you can do with this blog. Some of your papers, also the conjoint analysis paper by Dennis for instance, are quite dense to read, and I think many clinicians, if they're busy, won't be able to read them and think: ah, this is something we need to bring to our clinic. Agreed. 
Well, Dennis, would you like to tell us something about your background? Sure, I'm currently a professor of biomedical engineering at Washington University in St. Louis. My educational background is electrical engineering, biomedical engineering, neuroscience and medicine; I also have an MD. The lab that I set up at Wash U was a primate neurophysiology lab: we were recording neurons in the auditory cortex and trying to understand function, you know, complex vocalization processing in a vocal primate species. As we started doing a little bit more with humans, trying to replicate some of our findings, we did some electrode recordings but mostly collected behavioral data, and we became interested in perceptual training and trying to induce therapeutic changes in brain function, improving signal detection, specifically speech processing. That literature, if you know it, is very confusing, in the sense that the ability to achieve perceptual training effects that are persistent and transfer in any complex domain is very spotty; it depends on the lab and the preparation, and it's not a highly reproducible set of data, essentially, in the field. So we started reasoning about how we could optimize the training trials for each person, and that turns out to be a very hard problem. But we stumbled across this idea of active learning, not for therapeutic purposes, for training, but for testing, and the opportunity to speed up testing procedures really seemed to present itself with this set of tools we were using, and that has taken over what we do. We're not working on training at the moment; we're not even working on monkeys anymore. This is also why I appreciate the idea of a blog entry that might get things closer to clinicians. The way I'm contemplating this is that I still want to get back to training, but for very complex latent constructs, especially those that 
bridge perception and cognition, like speech. There are many points of failure in noisy speech processing that can happen from the ear to the brain, and those fall in between perceptual and cognitive phenomena. I'd like to build models and testing regimes that can bridge those gaps, and the amount of data required using classical methods is just prohibitive for unifying that kind of thing. So my goal nowadays is to extend this: the audiogram was just our test case, and it turns out to be quite successful, but it was really just us proving to ourselves that this approach would work and add value, and it also gave us some background to be able to do more complex model construction. My interest in this meeting would be exactly that: to take those ideas out and promote them as a possible future for behavioral testing in a wide variety of fields. That's cool, and what you mentioned, if I translate it to clinicians, this inducing a change in the brain is what we do with cochlear implants. When we start fitting them, a big problem is that what we do during the first months really changes the brain, so you cannot just say, okay, we go back and do it again. There is a kind of hysteresis loop: you have made changes and people have become accustomed to something, and that's a problem I think we are not yet able to tackle in the clinic. Paul Govaerts from Belgium did a review of fitting procedures, and it turned out that every clinic had its own procedure, and many of these procedures turned out to be fine, because the brain apparently adapts. So in terms of relating to what clinicians experience, I think this is an example. The other thing is that you started with the audiogram, and in our scoping review we found that the mean testing times are around five minutes per test, so if you're talking about time efficiency, then 
there's not much room for improvement, and that could also be a reason that clinicians are not yet tempted to adopt this. But if you show that you can combine it with other tests to speed things up, and with more complex theory, that might be a stronger case. That might be a more succinct way of summarizing my interest: joining the audiogram to other relevant tests into a unified, or mostly unified, testing procedure that integrates the data across all the tests to decide what the optimal sequence of queries is going to be. And I guess there's also a bridge to the work by Bert on fitting hearing aids and the complexities clinicians experience there. Yeah, so shall I introduce myself? Okay. I work as a professor in electrical engineering, signal processing specifically, at the technical university in Eindhoven. I still have a small affiliation with GN; at the time that we did the work on active learning for the audiogram I worked in a much larger capacity for GN than currently. My interest is in basically automating the design of algorithms. The brain is not born with the capacity for speech understanding; we learn to understand speech through spontaneous interactions with our environment, just as we learn to walk and to recognize objects. There's a beautiful theory pioneered by Karl Friston on how the brain computes, called the free energy principle; it's a very Bayesian, probabilistic theory. My goal at my lab here at the university is to translate those ideas to engineering, to build agents that learn purposeful behavior naturally, through spontaneous interactions with the environment. That could include speech recognition or object recognition, but it may also be robots. As for the active learning paper: since the approach in our lab is very Bayesian, it seemed natural. This was around 2014, when I had a PhD student, 
Marco Cox, and in a discussion we just figured out that this whole audiogram-taking looks like a classification problem: you have a discrimination boundary, and everything below is one class, you cannot hear it, and above it you can hear it. So let's just build a Bayesian classifier for this, and he did. It turned out that Dennis had the same idea around the same time, maybe even a few months earlier. Marco wrote that paper and put it on arXiv, and we found out that Dennis had written basically the same paper, so that's the story. Then it took a long time; recently Marco made some improvements to his design with a new prior and a mixture of Gaussian processes, and that's our 2021 paper. Okay, I missed it; I didn't read the update yet. No, that's fine. I was also surprised that you brought up the free energy principle by Friston; from what I have heard so far and tried to understand, it's quite complex, also when using it for actual predictions, and I think that's one of the critiques of this model. Okay, so the idea here is the following. What Karl Friston says is that the brain just follows the laws of physics, and for the laws of physics there is a sort of umbrella framework called the principle of least action: you can write down a function, technically a functional, and if you minimize that functional you can derive classical mechanics, electrodynamics, basically all the branches of physics. You can also write an information-theoretic formulation of this, and then it turns into what we in machine learning call variational Bayes. He claims that that's all that's going on in the brain: it just follows the laws of physics, and it turns out that if you write that down in an information-theoretic way, you're also doing Bayesian reasoning. So following the 
principle of least action in the brain leads to Bayesian reasoning, which is machine learning, and you can use it for information processing, for designing algorithms, for all kinds of things, for learning how to walk and learning how to hear. What we do in my lab is not verify that claim but use it as inspiration for engineering. I have an engineering lab where the design of hearing aid algorithms and the fitting of hearing aid algorithms is an interesting application area, but the principle is broad enough to include lots of other applications; you could think of self-driving cars or robots that learn to walk. The key observation is that since this is an umbrella kind of theory, it's basically a one-solution method for all problems. In engineering, when you start with a problem and go to the literature, you find 15 solutions, and you modify one, and now you have 16 solutions. The brain turns that around: the brain cannot afford to come up with a new solution method for every problem. There's just one solution method, free energy minimization, or following the principle of least action: one method that learns both the problem and its solution simultaneously, and it also scores how well the problem is represented and how well the solution solves the problem. So there is one cost function for all problems, and that makes it really nice for engineering too, because I can apply it anywhere, including to audiology if you want, but also to other areas. Is another way to say this that you apply probability theory to the problem, or is that an oversimplification? Well, it is a simplification, but it is also very accurate. And I think, in essence, the three approaches by Dennis, Joseph and Marco, because it's mostly Marco Cox's work, are very similar: it's a Bayesian classifier for an interesting problem, 
you know, finding an audiogram. But that framework of active learning is really broader; it can be applied to much broader sets of problems than just taking an audiogram. Yeah, I think this was one of the questions: Joseph, I think you asked about similar applications outside audiology, and you also started with "are we constrained only to the audiogram in this discussion", so I guess this question is already coming up naturally here. I guess the three of you all have your own thoughts or directions on where you are heading, so maybe it's an idea that you each briefly mention the future directions you're aiming for, or applications you think are interesting. Joseph, would you start? Yeah, I can say it again. The audiogram is sort of the ideal test bed because it's simple. It has those two dimensions: one of them is distance-based, so it could even be more than one dimension, just something that gives you a distance, and that's frequency; the other dimension, level, is the monotonic dimension in the classification problem. Having these two dimensions makes it the ideal test bed, and as you said, we can put lots of effort into it and decrease testing time from five to three minutes or even two minutes, which from a scientific point of view is exciting; from a practical point of view it's not too much. So it's an ideal test bed, but we really want to use it for further tests, and within audiology there are lots of further tests that can be done, like, as Dennis said, speech tests. With speech you have the problem that you have not two variables but many, and you have to identify which ones you want to learn. We have done a similar thing with the notched-noise test. In a notched-noise test you have about eight variables: signal frequency, masker frequencies, levels, notch widths and so on, and eight variables are too many; the curse of dimensionality just hits, so you have to tailor the problem to 
reduce the number of variables. What we did was reduce it to three variables, but one of them was signal frequency, which hadn't been done in a notched-noise test before; you typically test one auditory filter at one frequency. But the huge advantage of Bayesian active learning, like in our audiogram approaches, is that with a continuous frequency you suddenly get the auditory filters across the whole hearing range, and then you have a huge advantage, because testing several auditory filters at four or five frequencies takes two hours or longer, and with active learning you can do it in half an hour. That's a real difference for the clinic: you can put a patient into the booth for half an hour, but not for two, three or four hours, and the audiologist doesn't need to be present during that half hour. Those tests are automatic, and they can correct for errors: because everything is probabilistic, they can figure out that one answer is so unlikely that it was just a wrong button press. That's the beauty of these tests. We have worked on a few further tests, like auditory filters and dead regions, dead regions without Gaussian processes, but also Bayesian equal-loudness contours, and there are probably many more to do. I think our three groups will work in slightly different directions and provide many more tests, so that's great for audiology. But we should keep in mind that there is a much broader field of application in the whole of healthcare, wherever you ask patients questions, wherever you do, or can do, more than one test; there you want these Bayesian approaches. For example, when you measure blood values you need to take blood with a needle, which makes it less good as a test bed, and you can choose which values to analyze, and that costs money. So if you have a Bayesian approach that tells you which values are interesting to analyze, together with the doctor's opinion, that sort of thing is where we in audiology could 
be the pioneers, because we have such an easy test bed, and other fields of medicine could identify where they can use our approaches and integrate them into their practice. Yeah, it's a really cool broader perspective, and I would like to return to it later in this discussion, since what you mentioned, pioneering, would really be a paradigm change in medicine. I would like to discuss this further, but I think Bert and Dennis, you also have ideas for further applications. Dennis, what further applications are you considering? Yeah, I'll say I just agree with that assessment 100%. I keep saying this in talks, all of those points that Joseph just made, and I think they go over most people's heads. I really believe audiology can be an example because our problems are tractable; I won't call them easy, but they're tractable for these approaches, and the same kinds of approaches could be postulated in other fields. It's just harder there to state the problem in a way that's actually solvable in the same way. So we're starting in that direction, given my interest, mentioned earlier, in bridging perceptual and cognitive constructs: we're starting now to build cognitive models in the same way, and it is considerably harder than psychophysics, because the feature space comes for free in psychophysics, whereas in cognitive spaces there isn't even general agreement about what the feature space is. What is memory? It can only be operationally defined, and everyone has their own operational definition, to give you an example. So we're generalizing the principles, the model construction, the active learning, and then the population-level analysis, to these latent variables; we're trying to generalize what we're doing to active latent variable modeling, is the way I would say it. How we form these latent variable models is not going to be as simple as pulling off-the-shelf kernels like we've used so far for this 
probabilistic classifier, so we're trying to figure out ways we might do that, both empirically from data and from theoretical constraints that we can impose on the problem from other knowledge, and we're making progress. But it is so much easier to operate in perceptual space that we're still keeping projects alive there, because we can make progress. The machine learning audiogram that I've, you know, beaten into the ground, is partly because it is such a great model system; I think of the audiogram literally as a model system. It's a use case that has a gold standard, it's the simplest complex kind of psychometric function you can postulate, and it just makes a great test bed for whether we can speed things up by a factor of two or a factor of four, whether these methods can work at all. If they didn't work for the audiogram, I wouldn't spend all this effort trying to build more complex latent variable models. I think the next step, where we're going to get the biggest payoff for these methods, is in procedures or diagnostic trajectories for disorders that are highly variable within the population. When there is great population heterogeneity, you can't just average across big cohorts; you can't just take a little data from a lot of people, plug new patients into this population somehow, and understand the best way forward. In audiology we also realize that, because every fitting procedure, at least at the cochlear implant level, and I would argue even at the hearing aid level, is individualized in a sense; you can't just blindly pull rehab plans off the shelf and apply them to everyone. So we already have this mentality that things need to be individualized, and that doesn't really exist throughout the bulk of medicine, even though the concept of precision medicine is all about adapting your therapies to the individual. It's just not being thought about in the 
same way as we in rehabilitation think about it. So my goal is to expand these tools into a space where we can conceptualize more complex latent constructs in brain and behavior. I'm not going to leave brain and behavior, because I think there's plenty of work to be done there, but I want to use all of these as templates for the rest of medicine, to say: these active learning procedures are really useful when you have highly variable manifestations of disease, and we can reduce the amount of data we need to collect from each person to make a diagnosis and then ultimately decide on the optimal treatment. These methods might ultimately lead to a rational selection of completely individualized treatments for each individual. The way clinical trials work is that you give a cohort an intervention, and if it works, the range for which that intervention is deemed to be relevant is whoever was in that cohort. So we're trying to break out of these population- or cohort-level rules of inference and bring in the formal ability to infer across variation in populations and still pick the best choice for each person, and I think these Bayesian methods are ideal for that. Well, I think Joseph just explained the paradigm change that we need, but you also run into the same problem, I would say: for these cohort studies, also to get FDA approval or a CE marking, you need to test something on a group. As a clinician I should say that often we just have a single solution for hearing aids; there's a prescription rule, and it's more or less one size fits all that you give to almost everybody. Sometimes there is some tweaking, but then you run into a really gray zone where you don't really know what you're doing, and it's based on previous experience. I think if you ask clinicians for a fair assessment of how certain they are about what they're doing, it's probably either 
trial and error or something they've done before that worked for a particular patient, so they just try it again. That is also sometimes, I think, where clinicians feel they excel: oh, I know from experience with the cohort that this person is really different, and together the patient and I will look for a decent solution; or you give up after a couple of trials and say, okay, now we start to counsel on how to cope with this limitation that we cannot solve. I think what you propose would help the clinician in this search, but it's also a leap of faith, in the sense of: would I as a clinician still be able to understand the procedures I'm following, or is it a kind of black box that's providing me advice? Would a clinician still be needed, or is it better that the algorithm interacts directly with the patients, for instance? My brief answer is that I absolutely believe clinicians are always needed; clinician and patient, and maybe supporting family, need to be involved in the decisions. The Bayesian methods can provide guidance and suggestions, but they don't provide values; you can define a cost function, but that's outside the scope of the algorithm, it has to be defined by the people involved. So these are clinical decision support tools, and that's the right terminology within AI and medicine, and the right terminology here. They're here for very complex scenarios: the human brain doesn't handle conditional probabilities well, so you rely on the algorithm to compute those things and then evaluate at the outcomes level, essentially. 
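The procedure the three groups describe, posing threshold audiometry as Bayesian classification and letting the model choose each next stimulus, can be sketched in miniature. The code below is a hypothetical one-dimensional illustration only: a single frequency, a discrete grid of candidate thresholds, a logistic psychometric function, and uncertainty sampling (probe where the predicted response is most uncertain) as the active learning rule. The published methods instead use Gaussian process classifiers over the full frequency-intensity plane; all class and parameter names here are invented for the sketch.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class ThresholdEstimator:
    """Minimal 1-D Bayesian active learning for one hearing threshold.

    Keeps a discrete posterior over candidate thresholds (in dB); the
    listener's response model is a logistic psychometric function whose
    slope is set by `spread`.
    """

    def __init__(self, levels, spread=5.0):
        self.levels = list(levels)        # candidate threshold grid (dB)
        self.spread = spread
        n = len(self.levels)
        self.post = [1.0 / n] * n         # uniform prior over thresholds

    def p_heard(self, probe_level, threshold):
        # probability of a "heard" response given a hypothetical threshold
        return sigmoid((probe_level - threshold) / self.spread)

    def predictive(self, probe_level):
        # marginal probability of "heard" at this level under the posterior
        return sum(p * self.p_heard(probe_level, t)
                   for p, t in zip(self.post, self.levels))

    def update(self, probe_level, heard):
        # Bayes rule: posterior proportional to prior times likelihood
        lik = [self.p_heard(probe_level, t) if heard
               else 1.0 - self.p_heard(probe_level, t)
               for t in self.levels]
        unnorm = [p * l for p, l in zip(self.post, lik)]
        z = sum(unnorm)
        self.post = [u / z for u in unnorm]

    def next_probe(self):
        # uncertainty sampling: the binary response is most informative
        # where its predicted probability is closest to 0.5
        def entropy(p):
            if p <= 0.0 or p >= 1.0:
                return 0.0
            return -p * math.log(p) - (1 - p) * math.log(1 - p)
        return max(self.levels, key=lambda lvl: entropy(self.predictive(lvl)))

    def estimate(self):
        # posterior mean threshold
        return sum(p * t for p, t in zip(self.post, self.levels))
```

In use, the loop alternates `next_probe()`, presenting the tone, and `update()` with the listener's yes/no answer; an improbable answer (a wrong button press) barely moves the posterior, which is the error robustness Joseph describes above.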
So then I guess for writing up this interview it's important that we stress that point of why clinicians are needed, but also what change in mindset or approach is needed from clinicians, and how to get them curious to try this. What I can maybe add here is that on the website we see a lot of visitors from all over the globe, also from many countries in Africa and Asia, and I think there is curiosity but also fear. For instance, an audiology trainer, who was a little bit depressed about it, shared on LinkedIn that many of her students were uncertain what kind of job they would get, and she made a call: could you please share positive experiences of how your job can develop, and the opportunities. So it's more or less on my to-do list to say: well, there are a lot of things you can explore to improve your clinical care. We just wrote this Wikipedia article about computational audiology, and what I like about it is that I learned that my one-sentence summary would be: translating models into clinical care. I think Bert de Vries already said in a discussion two years ago that it's a model-based approach, but for this translating into clinical care it's of course really important that clinicians are involved and see the potential. You know, Jan-Willem, I have a suggestion actually, because I have had, I believe, five audiology students come through my lab and publish papers with me, including that very first paper. I have found that the younger students are very interested in this technology, while the older practitioners are the most skeptical, so it might be interesting to interview students who have worked with these automated and/or machine learning based methods and get some of those interviews up on the web. I think that could be interesting. Yeah, that's a good idea. Then let's continue with this question to you, Bert, especially since I think we are 
making this application space bigger and bigger, and probably, with what you mentioned your lab is doing, it's the biggest space, more or less. How would you define the applications and the scope, maybe medicine at large, if audiology is a kind of example case but the real benefit comes when you do this in medicine or society at large? Yeah, first, I completely side with what Joseph and Dennis have said. I am not an audiologist; we are all three originally trained as electrical engineers, so you need to keep that in mind. We have a very computational view of this whole field, and that may not be the best view; Dennis is also a physician, so he's maybe different. I have a few comments about what I just heard. It's important to remember that the audiogram just tries to estimate the hearing threshold, but the hearing threshold is not something physical in your brain; it's a variable in a model that we write down, and with that model we can predict, if we give a stimulus to a user, whether they will respond or not. All the Bayesian method does is provide a framework for estimating that variable, but the Bayesian framework is broader: it can estimate any kind of variable. So my interest, the next step, would be: let's estimate more parameters in the hearing aid algorithm. In my lab we are really interested in using the Bayesian approach not just to estimate the parameters, the fitting, but to derive the whole hearing aid algorithm. That's a long-term ambition that will take years, and we'll see how far we get, but in principle there's nothing that would stop us from doing that. Having said that, in the work we do here we never see a patient; it's sort 
of an isolated exercise here, which we hope clinicians can at some point use and take to their patients. None of my students will ever see a hearing aid patient; we have no idea how to deal with a hearing aid patient, we do technical work. It's very interesting, and I think there is a chance that with the work we do, where you just move about and interact with an agent, saying "oh, I like what I'm hearing" or "I don't like what I'm hearing", over time we can design a hearing aid algorithm. How that is used by a clinician who is talking to his patient is a completely different profession. So my advice to audiologists and clinicians would be: try to stay interested in what happens on the technical side. You don't have to know the details of Bayesian inference, but Bayesian inference is an important, growing field; it's really interesting to learn something about it, there's a reason why all three of us are so enthusiastic about it, and it's very important for audiology too. And reinvent yourself. I would say the same thing to signal processing engineers in the hearing industry, because if this works, if we build agents that design algorithms, then what do we do with the signal processing engineers? So it's not just a problem for clinicians, it's a problem for all of us, also for myself. We all have this anxiety about the future, and probably the soup will not be eaten as hot as it is served. That brings me to one of my favorite quotes, "never not be afraid", from the movie The Croods. It's very fun to watch, about a family living in the stone age who are afraid of everything, and then their whole world starts to change. So that was a funny reminder. But what you're saying now, I think people reading about your work might think: ah, okay, I need to find a 
faster way to get to the thresholds. But the reason we measured thresholds in the first place is that it's too complex to measure people's responses to all sounds. Ideally you would want to optimize how people hear any sound, but instead we were only able to measure thresholds and then use, more or less, the half-gain rule to decide how much amplification to provide for patients, and I think we have even forgotten about the ideal of making any sound audible. Another thing you are touching on now is how to get to ecologically valid assessments, because what we do is measure really artificial sounds in a sound booth; pure tones are not artificial as such, but we use only pure tones, or maybe warble tones, while you would like to know how people actually hear in their daily lives and how to optimize hearing in those situations. That's of course also really close to hearing aid fitting in optimized situations. Yeah, I think eventually most hearing aid fitting should take place in the field, where the problems happen. You don't want to fit anything until you have a problem; then you solve the problem and move on, and you solve the next problem and move on. That is sort of how we bumble through life, and I think it's also how we should teach our hearing aids to behave. And in principle you don't need pure-tone audiometry to estimate the hearing threshold: if it's a variable in your model, then it's just a latent variable, and with Bayesian methods, if you get responses from patients to other stimuli, you may also be able to estimate that hearing threshold. And maybe you will find out that in order to listen well to sounds we don't even need a very accurate hearing threshold; other parameters may be more important. So the hard focus on getting the most accurate 
hearing threshold is something that will, I think, become less important over time. That's how I see it, though I'm not sure the others agree. Can I quickly say that I agree? I have focused a lot on these hearing thresholds, the audiogram, but I view that as a proving ground for being able to incorporate more interesting latent variables. I would love to eliminate the sound booth entirely because of the ecological validity question; it just has issues there. I agree too, and I think one of you already wrote this in the answers: we clinicians also have the term "hidden hearing loss", which shows that there are things we cannot measure. With active learning, I suspect we will be better able to pinpoint these cases of hidden hearing loss, where our tests lack the sensitivity to detect hearing difficulties and there is a gap between the problems the patient reports and what we measure in the sound booth. In terms of time, now would be a good moment to ask each other questions: things you wondered about when reading each other's work, or that you are curious about. So let's do a round of questions. I have just talked a lot, so maybe someone else should start. Dennis, do you have a question for Joseph, for instance? Well, my original questions were origin-story questions, how everyone in this group got into this mode of thinking, but what I've already taken away is that we come at it from slightly different perspectives, though maybe not so different, since I hadn't realized we were all trained in electrical engineering and signal processing. I considered myself a signal processing engineer; I guess that's why I got into auditory work at the very beginning and then ultimately into the neurobiology of hearing later. I feel I know that better now. There are some unique angles, but I think we are all integrating ideas from other fields and bringing them into this space. That was interesting to hear, and I would have asked that question if I didn't feel I already had a pretty good sense of it. Cool, but maybe another question then. Joseph, do you have a question you want to ask Bert or Dennis? Well, they have just shown all their future plans and use cases in great detail, and I agree with all the motivations. As a comment: our approaches are a bit different, especially in the future plans. Bert's approach is big data, with free energy and variational Bayes, which is a very interesting area because he can handle many more responses, much more data, with those methods. Variational Bayes has not been used much so far in our applied Bayesian active learning field, so I think the work he is doing is very interesting. Yes, and for me it was a surprise how much alignment there is between the three of you, beyond the approaches themselves, which is no surprise; maybe I should have invited another person, clinicians perhaps, to get a sharper contrast between what is used in practice and what new tools are being developed. So I'll think about your idea of also interviewing clinicians. What I also wanted to share is that in November there was a reading group about computational audiology, initiated by a lecturer in audiology from Texas. She was interested in machine learning in her field, contacted me in October or so, and we put it on the website. I think eight people responded; they all followed the same Coursera course about artificial intelligence and had discussions every Sunday morning that month. Those were really nice discussions, held in Slack, with people typing in their responses and thoughts about the lessons, and she always prepared two or three questions for the week based on the part of the course the whole group had covered in roughly the same time frame. I could start by asking them how they intend to use these ideas in clinical training or clinical work. It shows that it is perhaps still a tiny group, but there are people genuinely curious about these new methods. Any other things you want to share, or shall we sign off nicely on time? Actually, I do have a question, maybe for everyone. It's interesting that the three of us have converged in this particular space, hearing science. Have any of you seen similar, parallel work going on in other fields? I can say that in vision we just got a grant to do the same thing for a vision test. There are Bayesian methods that have been used in other psychophysical domains, but this exact kind of approach we are taking I just haven't seen elsewhere. I'm wondering if you have. Not for a very long time. There's that 1999 paper by Kontsevich and Tyler in vision, who essentially described what we are doing now, but with the computation of the 1990s, so much simpler. After that I haven't seen much in that field. Out of personal interest I tried to reach out to rare diseases and autoimmune diseases, so far without success. I attend those sandpits at my university and at other UK universities, but for some reason the clinicians don't connect to that applied machine learning work. I think in most fields there is a sub-community of Bayesian scientists or engineers who try to approach the classical problems of their field from a Bayesian viewpoint. For me, I got interested in the Bayesian approach in the early 2000s, 2001 or 2002, I'm not sure, by reading articles from theoretical physicists applying it in cosmology. In astrophysics, experiments are extremely expensive, so they have to do active learning; they have to make sure their experiments are informative. And I thought: well, we work with people, so our experiments also have to be informative. That made sense, and since then I have worked on Bayesian methods. After an initial period, maybe two years, I realized that this approach covers basically the whole scientific endeavor. The Bayesian approach is essentially a description of science; it should be part of every field, and in every field there is a subgroup of people working this way, now also for hearing aids. In some communities that group is a little bigger, in others very small, but I would encourage people to study it. As for the papers we wrote, it doesn't matter whether you read Joseph's paper, Marco's paper, or Dennis's paper: the audiogram is a good test case, a very clean problem. If you have studied a little bit of Bayesian material, you can test yourself: can I read the paper that Joseph wrote, or the paper that Dennis wrote? Because if you can actually understand those papers, you will start to see that you can apply this everywhere. Take these Gaussian processes; I think much of that comes from the group where Joseph used to work, Richard Turner's, and he applies them everywhere. I think they have even put them on the web as a service: you can pay money and they will solve your optimization problem, software optimization problems. So they are
applied almost everywhere now, yes, but in every field it is still a very small subgroup of people. A good example is Neil in Cambridge, who had that 2011 paper about Gaussian processes and active learning which, interestingly, was never formally published yet has several hundred citations; I think the machine learning world was skeptical about it at the time. If you read it with an audiological eye, the whole paper suggests you can use this for the audiogram, but it took a few more years until Dennis was really the first to publish that and say: this is an audiogram, this has an application, this is not just a machine learning toy dressed up in fancy mathematics. So huge credit to Dennis, and to Bert at the same time, for being the first practitioners to apply it. Going back to Bert's statement, my favorite quote in this space is "no Bayesians are born, they're all converted". Most of the real Bayesians I have worked with have some kind of epiphany story: they were scientists, they stumbled across this literature, and they realized "this describes exactly how I think about these things, and no one ever taught it to me". Your story just reminded me of that. And I do agree there are Bayesians operating in these different spaces, but the combination of Bayesianism, the exact models we are using, and the treatment of these psychophysical tasks as classification problems to solve, that conglomeration, which is where the full power of these techniques emerges, I still think we are the leaders there. My conclusion at this stage is that our field is leading the charge, because I am not seeing it, at least to this degree, in other spaces. I really like this, so let me ask: maybe it would be good if the three of you shared your favorite quotes, so we can put them in the blog post. And also, as I put in the chat, there are these
resources. Since we have been discussing using this in medicine at large, it's important that people start to get an intuition, see use cases, and can play around with it. On this website, we have so far collected tools and models that can be used for remote audiology or for research purposes. I'm not sure whether it's feasible for you, but if we could share a working active-learning-based model, an API or something similar you already have running, we would share it so that people can get a feel for what it can mean in practice, and also what it cannot mean, its limitations. That could be really important in trying to lift the barriers we see now, which may partly be ignorance, partly fear, or the sheer complexity: if you want to use this across multiple domains, it's hard to plug in your audiology solution when vision or other domains have no such tools or simply don't speak your language. So if you have something you are free to share, please consider it. There are many different ways we could share it: the website is more a hub of repositories and resources, so there is GitHub code, but also final models on Zenodo, and we can look into new ways of sharing models. A small group of people, with Aloe Brian, Matthew Thompson, and Alicia Kalia Longa, has been thinking about the best way to do this. Because, if I may take one or two more minutes on this: GitHub is quite limited for sharing these repositories and for seeing what is already out there. Making code findable and accessible so that people can reuse each other's work is a quest I pursue in my free time, since we have no funding for it, but I think it will help with some of what we have discussed today. And it's also interesting to share best
practices, so that if a group shows "this is the computational infrastructure we use", other groups may think "I'm having similar struggles, maybe I will also use this tool or toolbox", like the Auditory Modeling Toolbox (AMT) or the Psychophysics Toolbox, and for Gaussian processes there are all kinds of toolboxes in both Python and MATLAB. Sharing websites, tutorials, or even working models could, I guess, help other researchers make this transition. I have heard that many people struggle, for instance, with moving from MATLAB to Python because they see this potential, and I'm sure there are people who have already solved that for themselves or for a small community; unlocking it for a bigger community will be rewarding and can have impact. Great, I will check whether I have something to share. Thanks in advance. And Joseph, I think I already asked you this, because I saw you already shared a model in the AMT toolbox, and I wondered whether you had other ideas about which tools we could use or promote for other researchers. Oh, and the last thing I wanted to mention: at the upcoming VCCA there will be a special session on predictive coding. We have been thinking about inviting Karl Friston, and in any case we will have a special session with Emma Holmes and Bernhard Englitz from Nijmegen, plus a third researcher whose name escapes me now, so I think it will be an interesting session for you if you'd like to join. Super. Okay, well then, thank you for your time and for this interesting discussion. We'll see from the automated transcript how many of the ideas survived; I'll clean it up, put it online, and see how well it's received. What I plan to do is combine some of your answers; I think we just need to collect the highlights, the quotes, and the most memorable things
of what we have discussed, and then we can put it on the website if the three of you approve; you can flag anything you would like rephrased. From my viewpoint, you are also free to just post the video of the conversation; I personally didn't say anything I didn't want to say, and I really enjoyed the viewpoints from everybody here. So yes, we can put it on Computational Audiology TV, which probably gets ten viewers a month; we'll see. We can of course share it that way too, but a one- or two-page summary is still much more accessible than asking people to watch a one-hour video. Thanks a lot, Jan Willem, it was really a great initiative; I really enjoyed it. Thank you also, Dennis and Joseph, it was really good to meet you. Agreed, this was fun. I agree too, thanks Jan Willem for organizing; it was really great. You're welcome, and we'll see when we meet again, maybe in person at a conference, or maybe in a project, if these discussions somehow lead to follow-up work by the three of you; that would of course also be really cool. Okay, super. Then thanks a lot everyone, enjoy your weekend. Bye bye.
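For readers who want a concrete feel for the Bayesian active learning idea discussed above, here is a minimal, illustrative sketch. It is not the model from any of the papers mentioned in the conversation: the logistic psychometric function, its slope/guess/lapse values, and the entropy-based stimulus selection are all simplifying assumptions. It estimates a single pure-tone hearing threshold by repeatedly choosing the stimulus level that minimizes the expected entropy of the posterior over candidate thresholds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid of candidate thresholds (dB HL) with a uniform prior
thresholds = np.arange(-10, 91).astype(float)
posterior = np.full(thresholds.shape, 1.0 / thresholds.size)

def p_yes(level, threshold, slope=0.5, guess=0.02, lapse=0.02):
    """Psychometric function: P('heard') for a tone at `level` dB given a
    listener threshold. Slope, guess, and lapse rates are assumed values."""
    core = 1.0 / (1.0 + np.exp(-slope * (level - threshold)))
    return guess + (1.0 - guess - lapse) * core

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_entropy(level, posterior):
    """Expected posterior entropy if `level` is presented next,
    averaged over the two possible responses ('heard' / 'not heard')."""
    like_yes = p_yes(level, thresholds)
    p_marg = np.sum(posterior * like_yes)            # predictive P(yes)
    post_yes = posterior * like_yes
    post_yes /= post_yes.sum()
    post_no = posterior * (1.0 - like_yes)
    post_no /= post_no.sum()
    return p_marg * entropy(post_yes) + (1.0 - p_marg) * entropy(post_no)

candidate_levels = np.arange(-10, 91, 5)
true_threshold = 35.0  # simulated listener, unknown to the algorithm

for _ in range(20):
    # Active learning step: pick the most informative stimulus level
    level = min(candidate_levels, key=lambda L: expected_entropy(L, posterior))
    heard = rng.random() < p_yes(level, true_threshold)
    # Bayesian update with the observed response
    likelihood = p_yes(level, thresholds) if heard else 1.0 - p_yes(level, thresholds)
    posterior = posterior * likelihood
    posterior /= posterior.sum()

estimate = float(thresholds[np.argmax(posterior)])
print(f"estimated threshold: {estimate:.0f} dB HL")
```

The published audiogram methods replace this one-dimensional grid with a Gaussian process posterior over frequency and intensity, but the loop is the same: choose an informative stimulus, observe the response, update the posterior.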