Welcome back to the last talk of rC3 this year: Peggy Seileb with "Hearables", a citizen science open hardware project on hearing aids and the DIY hardware and software behind it. So, go ahead.

Thank you very much. Hello, as was already said, my name is Peggy Seileb. I'm a computer scientist with a Master of Public Policy, and a media artist, currently working at Fraunhofer IDMT Oldenburg in the research group Personalized Hearing Systems. I want to introduce the "like to hear" self-adjustment open source mobile hearing aid prototype, which we developed in the citizen science project "Hear How You Like to Hear" from 2017 to 2020. I'll give you an overview of the motivation, what we developed, and the open issues.

Let's start with the motivation, which is all about hearing loss. About 17% of the population in Germany has a hearing impairment, and more or less the same percentage holds worldwide; some sources give slightly different numbers, but it is roughly the same everywhere. In Germany we are actually privileged: we have access to hearing aids and hearing support. Yet even in Germany, about 75% of those affected would not use hearing support to hear a bit better. If you do the numbers for Europe, with about 740 million people living here, that means roughly 52 million people in Europe have an untreated hearing impairment. That's a huge number. And even in a country like Germany, where people have access to hearing aids, nearly half of them return the device after the first fitting: they get it from the audiologist, try it out, and give it back.
I don't know whether it's the same when you buy headphones, that half of the people just give them back, but with hearing aids it is like this. So something is wrong with the idea of hearing aids and the way they are adjusted. Our assumption was that we should look into what is actually wrong with the concept of hearing aids, and why it doesn't meet customers' needs. Our first assumption was that adjustment in an audio laboratory is unsatisfactory for most people. I have a mild to moderate hearing loss myself; I tried it and had the same feeling, but we initiated this citizen science project in 2017 to find out whether it was just me or whether everybody feels like this. I took a format from sound art, sound walks: actively listening to surrounding sounds. We had this mobile hearing aid prototype with an intuitive GUI, so people could easily adjust the surrounding sound, and we ran these sound walks. People adjusted the sound the way they liked, and we collected about one hour of logged data of each person's behavior. We also asked people what they want from a hearable, what they want from a hearing aid, and got 550 submissions. And we ran two hackathons, the first ones ever held at Fraunhofer IDMT, with about 200 participants.
If you want to learn more about the project behind all this, have a look at the web page. It was the first citizen science project, the first open source project, and the first hackathons run at Fraunhofer IDMT Oldenburg; it was quite a new format for science, very experimental, and actually very exciting to go out of the laboratory into real-life situations. We took more or less three sound situations: on the street, in cafés with a lot of babble, and walking around in parks. We gave people the mobile hearing aid prototype with headphones and let them adjust the sound as they wanted, without an audiogram, without measurements: people with hearing problems, people without, all ages, all genders. We just let people adjust and looked at what comes out, whether there is any advantage in putting the adjustment of a hearing aid prototype in the hands of the people instead of an algorithm or an audiologist. One citizen scientist who is still active, and who actually uses the hearing aid prototype in everyday life, is Dr. Udo Spiegel, who has a severe hearing loss and had really big problems communicating in everyday life at all. He gave me this video message, which I want to play now; hopefully it works.

[Video] Here we are. My name is Dr. Udo Spiegel. I'm a professional engineer; in my working life I did a lot of technical development, and in my retirement I continued with further technical projects. Unfortunately, I had to quit because communication with my partners had become too poor. My hearing aid acousticians made a great effort and showed a lot of patience in the search for a suitable hearing aid with the right settings for understanding sound and speech. Unfortunately, that was all without success.
We didn't find a suitable hearing aid with which I achieved reasonably good speech understanding. With the "like to hear" box I have had very good experiences. The first was that good speech understanding was available, and especially impressive was the natural voice of the conversation partner. [End of video]

This shows very well the potential self-adjustment can have, just by optimizing the sound amplification in a way that perfectly fits the user's needs. It enhanced his ability to hear and understand considerably.

Now I want to explain how we made this hearing aid prototype. There is the "like to hear" framework, which we developed for the citizen science project; it is simply the easy, intuitive control layer of the hearing aid prototype, meant for everybody, even people who are not very interested in technology. Then there is the basic hardware setup, which we took over from the open source mobile hearing aid prototype by Professor Dr. Marc René Schädler; it is based on a Raspberry Pi and some other modules I'll explain in a moment. He also made some basic adjustments, originally so that his audiology students could use it, but we could use it for our purposes as well. The hearing aid algorithm itself runs on a processing platform, the open Master Hearing Aid (openMHA), which supports research and development of new hearing aid algorithms. It is more of an academic platform, but I'll explain it a bit better later.
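As a concrete taste of how a control layer can talk to openMHA: openMHA is configured at runtime over a plain-text TCP interface, where you send commands and `variable = value` assignments as text lines and each reply ends with a success or failure marker. A minimal sketch in Python might look like this; 33337 is openMHA's usual default port, but the example commands in the comments are purely illustrative and depend on the configured plugin chain:

```python
import socket

def mha_command(cmd: str, host: str = "127.0.0.1", port: int = 33337) -> str:
    """Send one text command to a running openMHA instance and return the reply.

    openMHA terminates each reply with a line containing "(MHA:success)"
    or "(MHA:failure)".
    """
    with socket.create_connection((host, port)) as sock:
        sock.sendall((cmd + "\n").encode("ascii"))
        reply = b""
        while b"(MHA:success)" not in reply and b"(MHA:failure)" not in reply:
            chunk = sock.recv(4096)
            if not chunk:  # server closed the connection
                break
            reply += chunk
    return reply.decode("ascii")

# Hypothetical usage -- variable paths depend on the loaded plugin chain:
# mha_command("?read:prototype.cfg")   # load a configuration file
# mha_command("cmd=start")             # start audio processing
```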
About the hardware: this is basically what you need if you want to buy the parts and build it yourself. You need a strong battery that can deliver about three amperes, otherwise the Raspberry Pi won't run, plus a sound card. We used headphones that are quite expensive; I think you can also use somewhat cheaper ones, but don't go too cheap on the microphones, or you won't be happy with the result. Then there is the pre-amplifier for the microphone signal, which is a bit of a tinkering thing; that's why we made a special version for now. All in all it's about 200 euros, and you can save a bit if you change the headphones and mics. We made this pre-amplifier especially for the C3 as an SMD board, and some are already assembled; you can order one, just get in contact with me. I gave a talk yesterday and I think three are left, so if you're interested, contact me and I can give you one; that makes things a bit easier, because you don't have to solder it yourself.

From the user's side, the usage of the hardware and software looks like this: there is the box, and your smartphone serves as the interface. You log into the Raspberry Pi, which acts as an access point, and open the web page, and the web page gives you the possibility to control the sound (I'll explain this soon). The sound control is processed by the Raspberry Pi, and the changed sound is then sent out to the headphones. We also logged the surrounding audio and the presets people chose, so we could look afterwards for patterns in how people react to sound in certain situations and which presets they choose. We got some very interesting and inspiring ideas out of this, but I don't want to go too deep into that in this talk; if you're interested in the data analysis, just get in contact with me.
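The round trip just described, where the browser sends the circle position over a WebSocket and the Raspberry Pi applies and logs it, could be sketched as a small message handler like the one below. The message fields and the log file layout are illustrative assumptions, not the project's actual format:

```python
import json
import time
from pathlib import Path

LOG_FILE = Path("liketohear_log.jsonl")  # one JSON object per line

def handle_control_message(raw: str) -> dict:
    """Parse one control message from the browser and append it to the log.

    Assumed message shape (illustrative only):
    {"userId": "...", "x": 0-9, "y": 0-9, "state": "on" | "off" | "reset"}
    """
    msg = json.loads(raw)
    entry = {
        "time": time.time(),
        "userId": msg.get("userId"),
        "preset": [msg.get("x"), msg.get("y")],  # position of the circle
        "state": msg.get("state", "on"),
    }
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```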
Now for the control, to explain how it works: it's really very simple. You have this circle; on the right you see the control page you would get. If you move the circle up, you get more volume and it gets louder; move it down and it gets softer again. On the x-axis you have a sound balance: the more you go to the right, the brighter the sound; the more you go to the left, the duller the sound. Very easy. And every single point of this graphical user interface corresponds to a certain sound setting; I'll explain that in a moment. For the web page, there is a landing page where you fill in a user ID and press the submit button; then, besides the control itself, you have a reset button for resetting and recalling the app, and an on/off switch for the amplification. There is control.js, a simple JavaScript class for the control and the WebSocket communication. The objectives for the self-fitting control were: nothing special, common Python modules, plain HTML and JavaScript, easy to understand if you want to reprogram it or see how it works, independent of third-party components, and open source, for the participation of citizens.

To look a bit into the software: this is the "like to hear" control and interaction framework, which controls the hearing aid prototype; remember, this is where the hearing aid algorithms are implemented, on the Raspberry Pi. Some hardware calibration and software setup was done so that it runs on the Raspberry Pi, with a basic openMHA configuration with dynamic sound compression and feedback reduction; that's all we used for now. openMHA itself offers much more, but our setup provides the basic hearing aid features, and, as I mentioned before, openMHA is a research
platform for novel algorithms. It provides a TCP/IP interface, which made it possible to build the web app, and once you get behind it, it's quite easy to configure and to run on the Raspberry Pi; you could also run it on any other Linux system. There is the connection to ALSA, and audio is transferred between the applications via the JACK Audio Connection Kit. So this is basically the architecture.

To go a bit into the amplification we use: it's very simple amplification over all frequency bands. If you move the circle more to the left side, you get linear amplification, the same amount over all frequencies; if you go more to the right, you get more amplification in the higher frequencies. That is the sound balance. For the overall gain: as you move the circle up the y-axis, you get more amplification, but at higher volumes, beyond a certain point, we don't add as much amplification, because it would get too loud; that's why you see this knee point. So it's a grid of 10 by 10 presets, which together chart the overall frequency amplification; as I mentioned, the x-axis is sound balance, the y-axis is overall gain, and there is a bit of compression for high input levels. The processing used nine frequency bands; the audio logging used about six bands. The presets were based on two channels, but we applied the same amplification to both, so it is effectively mono, with nine bands. We had gain presets for 65 different input levels, a bit of compression, and a step size of two from the minimum input gain.

To get a bit into the software: we use these Python modules. There is a main module connected to the user interface, which passes the information from the user interface to the control script.
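The 10-by-10 grid just described, where x picks the spectral tilt and y the overall gain, can be written as a tiny lookup function. The maximum gain and tilt values below are made-up placeholders, not the project's calibrated numbers:

```python
N_BANDS = 9   # frequency bands used in processing
GRID = 10     # the GUI exposes a 10 x 10 grid of presets

def preset_gains(x, y, max_gain_db=30.0, max_tilt_db=20.0):
    """Illustrative per-band gains (dB) for a circle position (x, y).

    x (0..9): sound balance -- 0 gives a flat gain over all bands,
              9 adds the most extra gain on the high bands (brighter sound).
    y (0..9): overall gain -- 0 is softest, 9 is loudest.
    """
    overall = max_gain_db * y / (GRID - 1)   # same gain in every band
    tilt = max_tilt_db * x / (GRID - 1)      # extra gain toward high bands
    return [overall + tilt * band / (N_BANDS - 1) for band in range(N_BANDS)]
```

In the real prototype the values came from a pre-computed lookup table with per-input-level compression, rather than a closed formula like this.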
The control script takes the presets out of a lookup table for the gain settings on one side; on the other side it logs the presets and the audio level analysis as JSON, and it also controls the openMHA processing chain for the sound amplification. If you look at the diagram, it's basically the same thing: the main module is connected to the server socket, communicates with the control script, handles the JSON, and there is a logging handler. That's basically how it works. On the client side we have control.js for the socket and WebSocket interaction, the sound former for controlling the 2D interface elements, and the menu handling the on/off switch and the reset you saw before on the user interface. And this is what the JSON logging looks like: we log the state, whether there was a reset, the on/off state, and the preset, that is, the position where the circle was set. So that's basically the state of the art at the moment.

So, what is our next step? Let's listen to the next citizen scientist who took part in the project; he's going to give us some inspiration.

[Video] Hello, my name is Jorge Curiti. I come from Brazil, and I have a medium hearing loss. I'm living in Berlin without public health insurance, only a private one, so I cannot afford a hearing aid. I took part in the "like to hear" project, testing the prototype for about 10 days, and I experimented with some situations. For example, giving a class to one person in a closed, pretty quiet place, it was really, really helpful: I could understand even a person who was not at a good distance, whom I normally wouldn't hear. But in a place like a bar, for example, with a lot of people talking, a lot of noise, and at different distances, it
was quite difficult to understand when a person was talking to me directly. And I think a smaller prototype, a smaller version, would be really helpful for me, physically speaking, because carrying the big box, and also the wires of the headphones, is not so practical. So that's it, thank you, bye. [End of video]

So, there are some open issues, and I actually want to call for support. It would be cool, as he mentioned, to make it a bit smaller: we could use even smaller Raspberry Pis, or a microcontroller, and we also have some ideas to make the microphone pre-amplifier smaller. We have a 3D-printed case, which needs some adaptation; you can find the STL file on GitHub if you like. It would also be more economical, and give better hearing, to use true wireless headphones; if anybody wants to try that out, just let me know what your experience is. For the web interface, an update is necessary: it would be good to update the operating system to Raspbian Buster and to make the browser communication more robust. These are basic things and not so much work to do. Then there are some nice-to-haves: an expert mode with enhanced adjustment, somewhat better noise suppression, and, what would be really cool, more flexible sound processing in real time, for example with Faust programming. If you're interested, just have a look at the GitHub repository, where you'll find more information.

I can also give you some hints for good papers. The first one is more or less a benchmark of current hearables that have these features integrated; the next one is about over-the-counter hearing aids, which are made for self-adjustment without an audiologist; and the last one actually describes where our idea
of this 2D touch interface comes from; it's from my group and my colleagues at Fraunhofer IDMT. So thank you very much for listening. If you want to support the project, get in contact with me; we also have social media accounts on Twitter and Instagram that you can follow if you want to be kept updated. And thanks also to the BMBF for the funding, and to Fraunhofer IDMT. Thank you.

Yeah, thank you very much, Peggy. There are no questions in the chat right now. Anything else you want to add to the talk?

No... okay, but I'll say thank you, because this is just what we are supposed to do: more open source hardware, more open source software. If citizens can do it themselves, why not? And maybe the cooperation between the professionals and the people who need it can be better that way. Thank you very much.

Thank you too for giving us the chance to make this project public. All the best for your further updates; I'm sure we'll see you again sometime, and until then, check out the social media or the pages where you can find the information.