Hi there. I was happy to accept this invitation by Kevin Lieber to provide a course about Computational Audiology: past, present and future applications. It will be a trip around the world of AI and audiology in 80 slides, where I hope to share lots of interesting concepts, inspiring people and maybe even some heroes. One of the first heroes I would like to mention is the French writer Jules Verne, author of Around the World in Eighty Days. At the time it was called an adventure novel, because the term science fiction had not been coined yet. What I think is important for the journey I would like to take with you is to realize that it's always more complex, and that this is just one journey around the world, so to speak. In time we are going from past to present and future applications, but it is of course not a full picture of what is happening around the world; there are many more interesting projects that I am unfortunately not able to discuss in this talk, or that I may not even be aware of yet. So let's start. What I forgot to mention is that the quote "it's always more complex" is taken from Dr. Steven Novella of The Skeptics' Guide to the Universe.

For this course there are three learning outcomes I would like to focus on. I want to give you examples of computational audiology, provide background about AI so that you can better understand how it can be used in audiology, and assess some of the potential risks and opportunities, and the things we can do as clinicians to make these applications more viable and accessible to patients. You will not learn how to build an AI system yourself, but I will provide some resources and interesting material to study, so that you can master AI and hopefully become more aware of how to use it. This talk is aimed at innovative clinicians.

Let's first dive into my background, and then into AI. I studied physics, worked briefly in aerospace, and now I work in Nijmegen at the Radboudumc as an audiologist, while also doing a part-time PhD. At our campus in Nijmegen there are really strong research groups in artificial intelligence, biophysics, and ENT/audiology, including the hearing and implants team and the hearing and genes team. Since our organization has the strategy of creating strong networks between care providers to have a significant impact on people's health, it was quite natural to develop my own motto: connecto ergo possum, I connect, therefore I can. That probably also determines the strength of a neural network, since it is based on its connections and their relative weights. When, in 2019, I discussed some of my plans for using artificial intelligence in audiology, the computational pathologist Jeroen van der Laak suggested to me: what you are trying to do sounds like computational audiology. That was a nice umbrella term to start my personal journey in AI and audiology.

So let's first have a look at AI, and there I think we should start with the definition of an algorithm. An algorithm is basically a set of step-by-step instructions that describes how to perform a task. It is a finite set of instructions, so it will end and you will not get stuck in an infinite loop. You may think of it as a recipe: it can be really complex, but in the end you will finish your task, and those tasks can be performed by humans or by computers. A minimal example follows below.
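To make that concrete, here is a minimal sketch of a finite, recipe-like algorithm from everyday audiology: computing a four-frequency pure-tone average from an audiogram. The function name and the dictionary format are just illustrative choices, not a standard API:

```python
def pure_tone_average(audiogram, freqs=(500, 1000, 2000, 4000)):
    """Average the hearing thresholds (dB HL) at the given frequencies.

    `audiogram` maps frequency in Hz to threshold in dB HL.
    """
    # A finite set of steps: look up each threshold once, then average.
    thresholds = [audiogram[f] for f in freqs]
    return sum(thresholds) / len(thresholds)

print(pure_tone_average({250: 20, 500: 25, 1000: 30, 2000: 40, 4000: 55}))
# -> 37.5
```

Like any recipe, it has clear inputs, a fixed number of steps, and a guaranteed end.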
Where does the term algorithm come from? Well, here our journey starts, in the Middle East, where Muhammad ibn Musa al-Khwarizmi, around the year 825, wrote books about computation; the term algorithm was derived from his name. We will continue this timeline after I have explained more about artificial intelligence in general.

You could say there are two categories. There is weak or narrow AI, which is built for a particular set of tasks: chess playing, self-driving cars, but also image classification, speech recognition or music recommendation. This is where all the current breakthroughs are happening and what is already impressing us. Then there is artificial general intelligence, which would be an autonomous system that surpasses human capabilities in the majority of economically valuable tasks; you could also regard it as an artificial consciousness. So far that is only known in science fiction: think of R2-D2 in Star Wars, or the voice of Scarlett Johansson in the movie Her. For now, it remains science fiction.

So let's focus on narrow AI. How does it work? Basically, you first define a task. Take my example of speech recognition, where a network has to convert spoken language into text; this can of course be useful for transcription services or voice assistants. Then you need a performance metric, for example the word error rate, which measures the network's effectiveness. Then you set the objective: the network's primary goal is to maximize the performance metric by continuously learning and improving its ability to recognize speech. This is done by training the neural network; through training, and sometimes through supervision, rewarding the system, it becomes better at extracting text from audio files. It is important to have a really good training set: a diverse set of audio examples and corresponding transcriptions. The network may extract features like pitch, tone or speaker identity, but whether we can even look into this black box also depends on the architecture of the system. Once it is trained, you can deploy the system, but only on similar tasks: a speech recognition network, for instance, would not perform well on image recognition. To the right you see an example, the wav2vec 2.0 model architecture, which was originally developed by Facebook; you can download it from Hugging Face and use it for your own applications. A hedged sketch of this workflow follows below.
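Here is a minimal sketch of that task-and-metric pattern, assuming the Hugging Face transformers package and the publicly available facebook/wav2vec2-base-960h checkpoint; the file name speech.wav and the reference sentence are placeholders:

```python
from transformers import pipeline

# The task: load a pretrained wav2vec 2.0 model and convert speech to text.
asr = pipeline("automatic-speech-recognition",
               model="facebook/wav2vec2-base-960h")
hypothesis = asr("speech.wav")["text"].lower()

def word_error_rate(reference, hypothesis):
    """The performance metric: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the quick brown fox", hypothesis))
```

The objective during training is simply to drive this kind of metric down on a held-out set of recordings.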
The next step in our timeline, the term algorithm, I have already explained. In 1763 we get to Thomas Bayes, who introduced the ideas of Bayesian inference. In the middle of this slide you see a formula, Bayes' rule: P(H | E) = P(E | H) P(H) / P(E). What is really important in this formula is that, given a prior, a belief about the current state, you can update that belief whenever you make an observation. That is a way of learning, and probably the basis both for how we learn and for how machines can learn. A next really important step, I think, comes around the 1950s, with much of the work done by Alan Turing, who, among other things, introduced the Turing test. In this test a person needs to find out whether the agent they are interacting with is a computer or a human: in a kind of conversation you ask questions to that agent, and from its answers you have to decide whether it is a computer or a human being.

Some say that AI has now come so far that we are no longer able to distinguish it from human responses, but experts have also explained that this is just a test of how well a machine can mimic human behavior; passing it does not mean the system is intelligent the way human beings are. The term machine learning was coined in 1959, and what I think is important is that in the 1980s we saw a shift from rule-based AI to statistical models, which I will explain.

A very important breakthrough came in the late 1980s and 1990s, when the first convolutional neural network, LeNet, was developed by the team of Yann LeCun. It showed the potential of neural networks, because it could perform tasks much better than previous systems. On its website you can see examples; here you see the architecture of the system. What is important is that LeCun not only proposed how such a system should look, but, more importantly, also had data: a large set of handwritten digits on which to train the system and actually demonstrate its performance. Later on, that inspired many people to come up with different architectures that were even better, and with larger, richer datasets that improved the trained tasks much further.

Another way to classify AI is not as weak versus strong, but as a whole collection of approaches: from classical AI based on rule-based systems, towards machine learning for pattern recognition, and then the current advances in deep learning, where perhaps 90% of the design is done by the computer, because the programmers provide a general architecture and the system is self-organizing. Really important here were both LeNet, the first convolutional neural network, and AlexNet, the first system to win the ImageNet challenge.

If we continue our very short history of AI, one really important moment was 1997, when IBM's Deep Blue defeated chess champion Garry Kasparov. That made many people realize the power of machines, which were now even able to win the classical game of chess, although Deep Blue was still using a rule-based model. A really important enabler came in 2009, when ImageNet was proposed by Fei-Fei Li: a free database of 14 million labeled images. It became a catalyst for AI development and for breakthroughs in computer vision and deep learning. I do not know of similar high-quality databases in audiology, except maybe the UK Biobank, which also contains genetic data. So I think what we need is an "audio health net": a rich database of audiological profiles, much more than only audiograms, which could hopefully become a catalyst for computational audiology in the years to come. What happened was that ImageNet led to AlexNet, which won the ImageNet challenge, and you could say the rest is history. What we need is data of high quality, because it is the data that makes the AI system.

Continuing this short history, a really important moment, I think, was 2016, when both Microsoft and Google claimed that their speech recognition systems were on par with human transcribers. This again was a breakthrough based on large datasets; Google had been training its systems on many of the videos on YouTube.
Another important milestone was 2017, when the first transformer model was proposed, which is the basis for the large language models we are witnessing today. In 2021 GPT-3 was launched, the AI text generation system that everybody now knows of. Recently, De Wet Swanepoel and I were interviewed by Dave Kemp for the podcast This Week in Hearing, where we discuss how this could be used in hearing healthcare; if you are interested, feel free to listen to that episode.

So what is computational audiology, to be more specific? It is based on algorithms and data-driven modeling techniques, and uses modern data collection tools to provide better clinical care. My one-sentence summary would be: complex models applied to clinical care. There are many examples on the website computationalaudiology.com, and also in the virtual conferences on computational audiology, a series that started in 2020.

Let's go through some examples in computational audiology, and first take a glimpse at approaches from the past. You could say that audiology is very suitable for computational models and algorithms, since it is very numerical. For instance, in the 1940s and 1950s Georg von Békésy collected data from measurements on cochleas of cadavers, and in this way he unraveled the tonotopy of the cochlea. An example of an algorithm is the NAL prescription rule, developed back in 1986 by Denis Byrne and Harvey Dillon (a sketch follows below). There are also examples of big data in audiology, like large collections of audiograms. A very important and interesting development is machine learning audiometry based on Bayesian active learning, which I will explain in more detail on another slide, and which we also discussed in a podcast about Bayesian active learning with Dennis Barbour, Josef Schlittenlacher and Bert de Vries, who independently developed Bayesian active learning algorithms to assess individual audiograms. Also interesting are the neural networks used to mimic human hearing, as developed in the laboratories of Josh McDermott and Sarah Verhulst. If you want to learn more about that, I invite you to watch the presentation given by Josh McDermott at the VCCA back in 2021, which was recorded, and the upcoming presentation by Sarah at the VCCA 2023 in June this year.
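Prescription rules like NAL-R are themselves a nice example of a clinical algorithm: a handful of arithmetic steps mapping an audiogram to gain. Below is a rough Python sketch of the NAL-R idea. The structure (a term proportional to the three-frequency average, plus 0.31 times the threshold, plus a per-frequency correction) follows Byrne & Dillon (1986), but treat the correction constants here as approximate values from memory; verify them against the original paper before any real use:

```python
# Approximate per-frequency corrections in dB (verify against Byrne & Dillon, 1986).
NAL_R_CORRECTION = {250: -17, 500: -8, 750: -3, 1000: 1,
                    1500: 1, 2000: -1, 3000: -2, 4000: -2, 6000: -2}

def nal_r_gain(audiogram):
    """Prescribe insertion gain (dB) per frequency from thresholds in dB HL."""
    # X term: 0.05 times the sum of the 500, 1000 and 2000 Hz thresholds.
    x = 0.05 * (audiogram[500] + audiogram[1000] + audiogram[2000])
    return {f: round(x + 0.31 * audiogram[f] + NAL_R_CORRECTION[f], 1)
            for f in audiogram if f in NAL_R_CORRECTION}

print(nal_r_gain({500: 40, 1000: 50, 2000: 60, 4000: 70}))
# e.g. {500: 11.9, 1000: 24.0, 2000: 25.1, 4000: 27.2}
```

The point is not the exact constants but the shape of the thing: a deterministic, finite recipe that any clinic could compute by hand, long before machine learning.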
So how would this translate to clinical care, and to helping people with hearing loss? Well, the challenges are incredibly large. More than 1.5 billion people have some degree of hearing loss, and there are not enough clinicians and audiologists to help all of them. And it is not only trained professionals we lack, but also equipment. In short, we need new tools, and we have to think about other clinical workflows and other ways to provide our services.

For instance, how can we improve access to hearing health services? The penetration of mobile phones, and especially smartphones, is more than 80% worldwide, which means that almost anybody has access to a smartphone. Those could be used, for instance, with apps for automated audiometry for screening or diagnostic purposes, or as the interface for remote care: doing tests at home, troubleshooting, or maybe even turning hearables into hearing aids. Here you see references to scoping reviews performed last year to assess the lay of the land. What we see now is that tools have been developed for remote support. For instance, this is data provided by Stefan Launer from Sonova about the total number of fittings: in-clinic fittings in gray, remote fittings in blue. Remote fittings are still a really tiny percentage, so although we have the tools, we are not yet applying them at the scale we need. Here we can ask ourselves: what are the barriers? Is it our own reluctance to apply these remote care models? We have to think about how to steer this more positively, and as clinicians I think we can take the initiative to improve this, because remote care has a lot of advantages for your own clinic and especially, of course, for your patients and customers. There are examples for specific groups, provided by researchers from Sonova, showing that it can be more engaging for teenagers, and that girls, for instance, tended to chat more with their audiologist via texting interfaces. I can relate to this myself: when we did remote care research with cochlear implant users, we used a messenger system, which provided a really nice way to interact, since the hearing loss was not a barrier. That helps in creating a level playing field, and people also shared unsolicited information about problems they experienced or things they desired, which can be an opening for follow-up conversations or for better suggestions. Another important driver, I think, is the carbon footprint of all our travels to the clinic, and the lost workdays or school days for patients who may need to be available for many hours, or even days, for a clinic visit. For small questions or quick follow-ups, remote care can be really handy.

Another interesting development of AI used in hearing healthcare is speech-to-text, which probably everybody has experienced by now, and which can be used in many day-to-day situations. It is only a matter of time before it is implemented on glasses and becomes part of an augmented reality, if you want to call it that. And I would like to share a really nice experience we had, which was almost a Turing test for hearing status. I had the pleasure to interview Dimitri Kanevsky from Google, and Jessica Monaghan and Nicky Chong-White from NAL, about all these breakthroughs in automated speech recognition. Dimitri shared his lifetime work of improving these systems; he lost his hearing in early childhood and needs these systems to communicate. What you see here
on the picture is actually a transcript of what he was saying, which helped us to better understand his impaired speech, while he could read the transcription of what we were saying. During this conversation, Nicky lost the connection to her AirPods, but she could continue the discussion by reading the live transcript, and Jessica and I did not even notice it. I believe this kind of Turing test for hearing status is liberating for people who now suffer from a hearing loss and skip group activities because they feel they cannot really take part. With these kinds of developments we can make group meetings more inclusive, regardless of hearing status.

Here are some tips on how to use this in your clinic. On this picture you see an example of NALscribe, an app developed specifically for audiology centers. Especially during the time of face masks, when people could not lipread, it was a really helpful tool that let people read what care providers were saying to them. It is important to always try to keep the screen between the speaker and the listener, so that the listener can still use the expressions on your face; one of the drawbacks of reading text is that it diminishes the ability to speechread. Another important factor is the position of the microphone: keep it near the speaker to capture the voice optimally, and when you are in a group with multiple talkers, use multiple microphones to best capture their individual voices. Some of my patients said they were actually anxious about going from virtual meetings back to in-office meetings, where they lost the advantages that ASR and the separate microphones gave them. Some general communication tips: speak in a calm and clear manner (sometimes when I use these systems I get corrected and realize, oops, I am talking too fast; if the system cannot follow me, the person I am talking to probably has difficulty following me too), and of course check whether you have been understood correctly, and make corrections if necessary.

Then there are a lot of features that can be powered by AI. Here I have some examples, like noise reduction, which we will go into in more detail, personalized sound profiles, and all kinds of adjustments that can be suggested by an AI agent while you are using a device. This slide also shows some of the limitations of AI-generated content: I wanted to create an infographic with pictograms of these different features and specifically asked the system not to use any text, but as you can see, text appears all over the place. Another important development is that these features can be integrated into larger health applications: your hearing aid might also count steps, which is a marker for healthy living. Here are some more examples.

Another example is Bayesian active learning, which can be used for quick diagnostics: to measure the audiogram faster, or to measure more in the same time. Here is an animation created by Josef Schlittenlacher. You see an audiogram, but with the axes flipped, so the normal-hearing line is around here. What you see is the algorithm's estimate of the audiogram, but also a measure of the certainty the algorithm has about its prediction, and the more stimuli it presents, the smaller this uncertainty gets. A minimal sketch of the idea follows below.
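To give a feel for Bayesian active learning, here is a toy illustration of my own (not the actual algorithm of Schlittenlacher, Barbour or de Vries). It estimates a single hearing threshold by keeping a posterior over candidate thresholds, always presenting the tone level whose outcome is currently most uncertain, and updating the posterior with Bayes' rule after each response:

```python
import numpy as np

rng = np.random.default_rng(0)
true_threshold = 42.0                      # dB HL, unknown to the algorithm
levels = np.arange(-10, 101, 5.0)          # candidate stimulus levels
grid = np.arange(-10, 101, 1.0)            # candidate thresholds (hypotheses)
posterior = np.full(grid.size, 1.0 / grid.size)  # flat prior

def p_heard(level, threshold, slope=5.0):
    """Psychometric function: probability of a 'yes' response."""
    return 1.0 / (1.0 + np.exp(-(level - threshold) / slope))

for trial in range(20):
    # Pick the level whose predicted outcome is most uncertain (closest to 50%).
    predicted = np.array([posterior @ p_heard(lv, grid) for lv in levels])
    level = levels[np.argmin(np.abs(predicted - 0.5))]

    # Simulate the listener's response, then apply Bayes' rule.
    heard = rng.random() < p_heard(level, true_threshold)
    likelihood = p_heard(level, grid) if heard else 1.0 - p_heard(level, grid)
    posterior = posterior * likelihood
    posterior /= posterior.sum()

estimate = grid @ posterior
uncertainty = np.sqrt(((grid - estimate) ** 2) @ posterior)
print(f"threshold ~ {estimate:.1f} dB HL (sd {uncertainty:.1f} dB)")
```

Notice that the procedure delivers not only an estimate but also its standard deviation, which is exactly the kind of quantified uncertainty shown in the animation.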
This quantified uncertainty is really informative, I think, because often when people have performed an audiogram and have some doubts, they add this as a comment at best, which is hard to quantify or to use a couple of years later, when you no longer know the exact situation and circumstances in which the audiogram was measured. These kinds of tests can also be used for measurements other than the audiogram, like loudness growth or dead regions in the cochlea; Josef Schlittenlacher has developed more of these algorithms, and I think it would be a really great breakthrough if we could bring this into clinical practice.

Then some remarks about noise reduction by deep neural networks, or speech enhancement. There is a recent paper from Eric Healy et al. in which they say that the key to efficacy is the ability of an algorithm to generalize to conditions not encountered during network training. They have a really large set of different noises and speakers on which they train the network, but someone using these algorithms in daily life will of course encounter different voices, different situations, reverberation, noise, acoustics, and so on. What you see here is an example of a clean speech segment from one of the test sentences, then how it looks when it is really noisy, where it is really difficult to see the signal, and finally how it looks after the algorithm, an attentive recurrent network, has cleaned and enhanced it. I am glad to share that we will discuss this further at the VCCA 2023. If I think about potential future applications, one would be to tie these kinds of algorithms to large language models, because large language models are basically predictors: they try to predict the next word someone is going to say. I often see people with hearing loss doing the same thing, trying to guess the next word. If a large language model would feed its guesses for the next words to the algorithm that is cleaning the signal, that might give even more benefit, provided these algorithms become fast and computationally efficient enough to build into an actual hearing aid. A small illustration of next-word prediction follows at the end of this section.

Another really cool example is virtual training. The group in Cambridge led by Debbie Vickers and others created a virtual reality application that teenagers with bilateral cochlear implants could use to improve their spatial hearing. It was presented at VCCA 2021, if you would like to know more, and it has now also been published in Frontiers in Digital Health. Or you could use these networks to simulate human listeners: as a researcher or clinician you may think, this could be a good setting, let's test it in silico, and only if it really gives an improvement, provide it to your patient. This way you can test many more settings than is feasible in real life; Meyer presented this at the VCCA 2022.

Then another really interesting development, very recent, from this year, is the potential application of AI chatbots in hearing healthcare. Here is an example of how patients could use one: for screening, by explaining the problems they experience, so that the chatbot can assess whether the person needs further testing; for education and support, explaining difficult listening situations and how to make them easier, for instance by reducing background noise; for helping to get people to the clinic; for providing follow-up reminders; or as a kind of assistant in teleaudiology services.
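Coming back to the speech enhancement idea above: large language models really are next-word predictors. A tiny sketch of that, assuming the Hugging Face transformers package and the small public gpt2 checkpoint, which prints the model's top guesses for the word following a partial sentence:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small public GPT-2 model, used purely to illustrate next-word prediction.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "Could you please pass me the salt and"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probabilities for the token following the context.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p:.2%}")
```

One could imagine such predictions biasing a speech enhancer towards likely words, much as listeners with hearing loss guess ahead; whether that can be made fast enough for a hearing aid is an open question.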
Okay, so how could you use a chatbot in your clinical practice? Imagine you have seen a patient, tested his hearing, and this audiogram is the result. You know this is an older person with bilateral atresia, so you would expect maybe a maximum conductive hearing loss on both sides. You provide advice, the person goes home, and then does not remember everything you explained during the visit. What this person could do is simply take a photo of the audiogram and upload it to an AI chatbot, maybe even a chatbot powered by your own clinic, so that it can also access this patient's information. What you see here is what I would call the input audiogram. What is interesting is that this is a real audiogram from our clinic in Nijmegen, using local conventions, for instance for the insert earphones, so these are symbols that I am sure GPT-4 has never seen before; it was not trained on similar audiograms. Let's test what GPT-4 makes of this audiogram. Here you see an application, MiniGPT-4, developed for image understanding. On the left you see the audiogram I uploaded to MiniGPT-4, and I asked the system: please explain this audiogram to me; what type of hearing loss do I have? If you look at the answer, it is interesting: it says the audiogram shows that the person has normal hearing in the low and mid frequencies, but a moderate hearing loss in the high frequencies. That is completely wrong. But if you realize that the green line indicates age-related hearing loss, which we use as a reference in our audiograms, then the system gave a correct description, only of the wrong curve. For a first attempt I would say that is not bad at all; it is comparable to the answers I get from medical students when we explain how to read audiograms, and they often have similar interpretations. So I see a lot of potential to further train these systems, so that instead of making guesses they provide really good information that we as clinicians have validated or verified, to make sure we give the right information to our patients.

So here is a first version of a hearing healthcare chatbot. It is called Ellen, and I like to refer to Ellen as non-binary, though that is something I still have to get used to. Why is this interesting? Well, Ellen was written by GPT-4 itself: I just asked GPT-4, please program a chatbot for me, and this is what it provided. I am not a programmer myself, so that is one thing I find striking; the other is that you can actually use it. It contacts the OpenAI API and answers your questions, and when a question is not related to audiology, it is supposed to reply with something like: I am an audiology assistant, I am not an expert on the question you asked me. I can tell you it does not work flawlessly, but what I can imagine is that Ellen and I would form a kind of centaur audiology team: human and machine together. The term is known from centaur chess: Garry Kasparov was defeated in 1997, but in 1998 he held the first centaur chess event, where half-human, half-AI teams (named after the half-human, half-horse centaur) competed with each other, and in the first years this combination of human and AI was stronger than AI alone. I think in hearing healthcare we could likewise benefit from the potential of AI and, of course, our own strengths as clinicians.
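I cannot reproduce Ellen's actual source code here, but a minimal sketch of such a chatbot, assuming the openai Python package (v1 interface) and an OPENAI_API_KEY in the environment, could look like this; the system prompt is where you constrain it to audiology:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are Ellen, a friendly audiology assistant. Answer questions about "
    "hearing, hearing loss and hearing devices. If a question is not related "
    "to audiology, reply: 'I'm an audiology assistant, that is outside my "
    "expertise.' Advise users to consult a clinician for medical decisions."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    question = input("You: ")
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4",  # any chat-capable model would work here
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("Ellen:", reply)
```

Keeping the full conversation history in the messages list is what lets the chatbot follow up on earlier questions, and the system prompt alone is, as I said, not flawless: a clinic deploying this would still need validation and human oversight.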
That brings me now to the part about the risks of artificial intelligence, and here I start with artificial general intelligence. I saw this picture, which I found really powerful: AI on the right side, the nuclear bomb on the left side, and a quote from Albert Einstein. Why is this in the media now? Well, the "godfather of AI", Geoffrey Hinton, just left Google because of the concerns he has about AI, and I think these discussions are about strong, general AI. There are people who have doubts or worries about this development, because they say such an AI could surpass humanity in general intelligence and become superintelligent, and how would we be able to control such a superintelligent system or being? We know of no examples of less intelligent beings controlling beings of superior intelligence. I think that is a good risk to think about, since the nuclear bomb is something we were able to contain in the end, but if we unleash superintelligence, we do not know whether we could get the genie back into the bottle. Another reason I wanted to use this picture is that on the left it shows nuclear fission, but nuclear power also includes nuclear fusion, which could provide almost endless energy to humankind. The same goes, I think, for AI: we can use it for creation or for destruction.

I am sure I am not able to cover all the important aspects, so let me just share my two cents on this very complex topic. Let me state first that I take a Bayesian approach here: what I share is my current belief, and I am happy to update it if new evidence and observations are shared with me. I believe that the biggest threat is our ignorance, and how we can ignore the suffering of other beings. An example is me buying petrol for my car, knowing that it comes from countries where human rights may not be respected, and that driving the car has a very bad influence on the climate. There are many more examples, of course. A very easy one is what Tom Lehrer refers to in his song: "Once the rockets are up, who cares where they come down? That's not my department, says Wernher von Braun", another way in which our ambition can make us blind to the consequences of our actions. And although 90 percent of people have good faith, somehow we tend to choose the 10 percent that does not; that is how we end up being ruled by people like Mr. Putin or Mr. Trump and their likes, who pose a bigger threat to society, and to people of different thoughts or race, than the nuclear powers that we do control, or the power of AI that we may control in the future. So let us be aware of these risks and think about how to contain them. There is one positive example I would like to share: if you lived in the 18th century, you could know that the sugar you bought had been produced, for instance in Jamaica, by enslaved people, and it was in the UK that women started refusing to buy sugar. It was the first example of a product boycott, simply because people did not want to be complicit in slavery. Here, I think, we have to think about rules, and about shaping the rule of law, such that it protects vulnerable people. There is much more to say about this topic, of course, but let's focus on AI in healthcare again, and look at the risks of narrow AI. I think these are of a completely different order; the other day I made the remark that music generated by AI, for instance, will not, I think, have devastating consequences for
humankind, and the same goes for applying AI in hearing healthcare, although of course we have to seriously consider both the positive and the negative consequences. I believe most of these consequences are comparable to those of the digital revolution we are already in. What I have listed here are risks tied to data: biases in the data that make predictions based on it wrong or unfair, violation of privacy by collecting data people did not consent to, or the problem that we do not control data that says something about us or affects our lives. There are other risks of AI that are more about responsibility, like building a system that has positive applications but can also be used negatively: a lipreading system, for instance, can support communication for hearing-impaired people, but it can also be used for surveillance, say by a dictatorial regime. And there is the question of liability, which is why it is really important that we come up with oversight and regulation. Here lies a challenge for our professional societies, and for us as professionals, not to do nothing: I think it is important to experiment and to make errors, but then to learn from them and improve the system.

What I would like to share, therefore, is the mindset of infinite games, because the world we live in can be regarded as an infinite game: there are no fixed rules, and there is no winning or losing; the only objective is to keep playing. Healthcare is an example, because there will always be people suffering from sickness, and as actors in this game we cannot define the rules, but we can decide how we play, and how we respond to new players that join, for instance Google entering healthcare, or new colleagues joining us. I presented this before as the game of computational audiology. Don't take it too seriously, but I think it helps to adopt a mindset of thinking through the consequences of our actions. To play this game, it is important to have the motivation and the resources to do so, and as clinicians we should think about pooling our resources so that we can compete in this very large field.

Here I reintroduce Ellen as a virtual audiologist with superpowers and wireless capabilities. On the chest, in the middle, you see the public domain mark, which means there are no known copyright restrictions; the avatar is in the public domain. Ellen is a tribute, as I said, to Alan Turing, who played a crucial role in cracking intercepted coded messages from the German Enigma system, which enabled the Allies to defeat the Axis powers in many crucial engagements. However, Turing was prosecuted for homosexual acts after the war and underwent chemical castration, and only long afterwards did the UK government apologize for the way Alan Turing was treated.

So what is the role for audiologists, clinicians and professional societies? I think we have to anticipate the developments, and the scenarios that can play out, and develop a shared vision of the direction we want to go in. What we can do is share our best practices, and maybe also our errors, and think about how to share data so that we can learn lessons together. And do realize that this game of audiology has no boundaries: what players in Europe or the US are doing affects players around the globe, and we cannot opt out of this game. So I would
suggest that we think about more open systems for the exchange of data. Imagine: you see a person with mild hearing loss today; that person has measured their hearing loss using automated audiometry and is now using an OTC device as a first kind of rehabilitation. Ten years later the hearing loss may have progressed, and a clinic may be the best place to go for further treatment. Would you then be able to use the data that was collected before, in other systems and other clouds, to predict the future hearing status? I think the only way to make that possible is to create more open ecosystems based on the FAIR principles, making data findable, accessible, interoperable and reusable (a sketch of what such an exchangeable record might look like follows at the end of this section). And here is an attempt to share resources, by which I mean research software but also clinical tools and best practices, so that we can inspire peers, increase transparency, learn from each other's tools, build a community, and maybe facilitate cooperation across centers, because only then can we play a role on an international scale. You can scan this QR code to get to the website, where there is a list of resources and models, and feel free to reach out to me if you have an interesting demo or tool that you think could also benefit your peers.

Then I would like to summarize with a short list of recommendations. Think of the care you provide with an infinite-game mindset: not only about the short term, but also about the long-term consequences. Then you may realize, as I do, or maybe agree with me, that we need AI and automation to assist us, so that we can serve more people with hearing loss and improve access to hearing healthcare. A really important facilitator is open standards and ways to exchange data; let me remind you again that in computer vision it was ImageNet that made the breakthrough possible, and I think that if we create an audio health net with really rich data in it, we can further improve AI in hearing healthcare, or computational audiology, and hopefully reap the benefits of interoperability and collaboration. And of course we should not neglect the risks of big data and AI technology that I mentioned before, but it starts with more people understanding that it is the data that makes the AI, and that we therefore have to be open and transparent, and able to mitigate risks.

If you want to learn more about these developments, and about the developments that are happening right now, please join the VCCA this year, on June 29 and 30. It is aimed more at researchers, and I am really happy to share that we have excellent keynotes and featured talks, as well as many nice submitted talks; most of them will also be recorded, so you can watch them later if the live sessions are not in your time zone. Professor Malcolm Slaney will share his experiences at Google's machine hearing research group; Professor DeLiang Wang will tell us more about deep learning and how it can be used in speech enhancement; and Professor Sarah Verhulst will share her insights into how deep learning can be used in computational auditory models, in mimicking human hearing, and in improving hearing aids and other devices. If you would like to know more about computational audiology, there are lots of resources: previous presentations at the VCCA, like the 2021 keynote by Josh McDermott, who tells more about new models of human hearing via deep learning, the podcast from the Computational Audiology Network, and Computational Audiology Television.
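Coming back to the FAIR data exchange idea above, here is a purely hypothetical sketch of what a findable, interoperable audiogram record could look like as JSON; the field names and the schema URL are illustrative, not an existing standard:

```python
import json

# Hypothetical exchangeable audiogram record; all fields are illustrative only.
record = {
    "schema": "https://example.org/audio-health-net/audiogram/v1",  # findable
    "patient_id": "pseudonymized-7f3a",       # privacy: no direct identifiers
    "collected": "2023-05-01",
    "method": "automated-audiometry-app",
    "transducer": "insert earphones",
    "thresholds_db_hl": {                     # interoperable: explicit units
        "right": {"500": 25, "1000": 30, "2000": 40, "4000": 55},
        "left":  {"500": 20, "1000": 30, "2000": 45, "4000": 60},
    },
    "license": "CC-BY-4.0",                   # reusable: clear terms
}

print(json.dumps(record, indent=2))
```

The point is that a record like this, with explicit units, provenance and license, could move with the patient from an OTC app today to a clinic ten years from now.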
If you scan this QR code you will get to computationalaudiology.com where, as I said before, we have collected a lot of resources and interesting models to learn about; feel free to explore and have fun getting to know more about the potential of AI in audiology. Lastly, there is a great presentation by Martin Görner, "TensorFlow and deep learning, without a PhD", where he really shows how to train deep learning systems, for instance on the digit recognition task. It is about three hours in total and really awesome if you want to know more. You can find it on computationalaudiology.com, follow the link below, or just search for TensorFlow and deep learning. Thank you for your attention; I am happy to address your questions, and of course you can ask Ellen any question related to audiology, or email Ellen or me. I am looking forward to connecting and learning with you, and to seeing what the future has in store for us.