Hello everyone. I am Lorenzo Picinali and, on behalf of a large number of co-authors, as you can see on this slide, I am presenting our contribution, which has a very long title: Designing the BEARS (Both EARS) virtual reality training suite for improving spatial hearing abilities in teenage bilateral cochlear implantees. Let's start by looking at the context of this work. We know that bilateral cochlear implants are now routinely offered to children, and research has shown that bilateral implantees, when compared with unilateral ones, do better in language development, spatial listening skills, and speech-in-noise understanding. At the same time, these skills are still far below those of normal-hearing children and, possibly even more importantly, there are no accepted standardized protocols for the fitting or rehabilitation of bilateral cochlear implant users. Considering this context, a new project has started, funded by the NIHR and coordinated by Cambridge and Guy's and St Thomas'; we will see a bit more about the whole team later. This is the BEARS (Both EARS) project, and BEARS aims at exploiting auditory training in order to improve the spatial hearing skills of bilateral cochlear implant users, looking specifically at three areas. The first is sound source localization: the ability to localize sound sources in the surrounding space, also at different distances, for example. The second is speech-in-noise perception: the ability to understand speech in noise, with a specific focus on the spatial attributes of the scene, for example where the noise is not located in the same position as the speech sources. And the third is music listening. That is the BEARS project in general, but what are we going to talk about in this presentation? We will focus specifically on the action research protocol we have put in place in order to develop our games, our VR applications.
Here you can see two pictures from a workshop we ran a while ago, where we directly involved all the stakeholders and tried to design and develop some of these games. So let's talk quickly about action research. Action research seeks transformative change through the simultaneous process of taking action and doing research: research with, rather than research on, people. Now, I am an engineer, and in engineering we often refer to this as participatory design: the idea of attempting to actively involve all the stakeholders in the design. In this specific context, the term stakeholders covers a lot of people. It obviously includes those directly involved in the development, such as engineers and developers, but also speech and language therapists, clinicians, music therapists, and audiologists. At the same time, it also includes teenagers and young adults with cochlear implants, as well as parents, friends, and teachers: the people working and living with them. All these groups of stakeholders can potentially be involved in the design and development process, and the benefit is clearly that we can design and implement applications that are more fit for purpose, more usable, and ultimately, potentially, more successful. So let's have a look at the action research cycle within the BEARS project at this stage of the development. First we have the stage of designing and developing the applications. Then comes stakeholder involvement: once an application, or a certain stage of an application, has been developed, we involve stakeholders through focus groups, surveys, and interviews, or some people take the application home and play it for a certain amount of time. After this stakeholder involvement stage, we move to the creation of wish lists, which are then translated into user requirements, which then dictate further development, and so on.
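To make the wish-list-to-requirements step of this cycle concrete, here is a minimal sketch of how stakeholder feedback items might be ranked and trimmed to fit the available development capacity. The item names, the vote/effort fields, and the scoring rule are all hypothetical illustrations, not the actual BEARS prioritization procedure.

```python
from dataclasses import dataclass

@dataclass
class WishListItem:
    # Hypothetical fields; real BEARS wish lists are free-form feedback.
    description: str
    votes: int   # how many stakeholders raised this item
    effort: int  # rough developer estimate, 1 (easy) to 5 (hard)

def prioritise(items, capacity):
    """Rank wish-list items by stakeholder demand per unit of effort,
    then keep only what fits in the available development capacity."""
    ranked = sorted(items, key=lambda i: i.votes / i.effort, reverse=True)
    selected, used = [], 0
    for item in ranked:
        if used + item.effort <= capacity:
            selected.append(item)
            used += item.effort
    return selected

# Illustrative items loosely inspired by the examples in this talk.
wish_list = [
    WishListItem("Tablet mode for users with balance problems", votes=9, effort=3),
    WishListItem("More customizable cafe scenarios", votes=6, effort=5),
    WishListItem("Wireless audio streaming support", votes=4, effort=4),
]
requirements = prioritise(wish_list, capacity=8)
```

Any scoring scheme would do here; the point is only that the loop forces an explicit, repeatable step between raw stakeholder feedback and what the developers actually commit to building.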
So we go through this loop of development, involvement, requirements, and development again. Unfortunately, it is not quite that simple: there are some intermediate steps. For example, to go from development to stakeholder involvement we need to produce mock-ups or, if those are not available, perhaps in a very early stage of the development, videos or usable prototypes, and so on. Then, to go from stakeholder involvement to the wish list, we need to summarize and itemize the feedback, which often comes in different forms. And to go from the wish list to development, we need to convert this list of items into requirements for the developers, and then prioritize those requirements, because you often have more requirements than you can actually address. So it is a complicated process, but the concept is this loop of moving from one stage to the other and then going back, which is the key to this type of research. So what are we developing? The first thing to say is that this project builds on a previous project called 3D Tune-In and on the tools developed within it. Specifically, we are using the 3D Tune-In Toolkit, a C++ library for sound spatialization and for the simulation of hearing loss and hearing aids. We are using Musiclarity, an application created for facilitating music listening for hearing-impaired listeners and hearing aid users. And finally, we are using a research tool, an HRTF adaptation game: as you can see here, it is basically a shooter game that was designed for training spatialization skills. And what are we aiming to create within the BEARS suite? Three applications, looking specifically at the three areas outlined before: a sound source localization training app, a speech-in-noise understanding training app, and a spatial music training app. As you can see, all three focus on the spatial element of sound, but on three very specific aspects.
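For readers less familiar with binaural spatialization, here is a small sketch of one of the cues a spatializer such as the 3D Tune-In Toolkit reproduces: the interaural time difference (ITD), the delay between a sound's arrival at the two ears that listeners (and bilateral implantees) use for localization. This is a generic textbook approximation (Woodworth's spherical-head model), not the toolkit's actual API.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at room temperature
HEAD_RADIUS = 0.0875    # m, a common average adult head radius

def interaural_time_difference(azimuth_deg):
    """Woodworth's spherical-head approximation of the ITD (in seconds)
    for a far-field source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly to one side (90 degrees) gives the maximum ITD,
# roughly 0.65 ms: one of the cues that bilateral implants, with
# training, may make usable again.
max_itd = interaural_time_difference(90.0)
```

Real HRTF-based rendering also encodes level differences and spectral cues, which is why training tools use measured HRTFs rather than a formula like this one.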
Now, let me give you some examples from the process we have run so far. This is an example of what I mentioned before, going from development to user involvement, and of the kind of information we can share with the end users: a video of our speech-in-noise training game that we shared in order to gather feedback. I will play it for a few seconds so that you can get an idea. "The BEARS project is a series of training games aimed at improving the hearing abilities of children and teenagers with bilateral cochlear implants. The speech-in-noise game specifically focuses on the ability to recognize speech. The game in its current state involves a cafe scene where you play the role of a counter server. Several customers will be placed in front of you, one of whom will then ask to make an order. Can I order something, please? You will then be required to identify which of the customers is asking for service." So as you can see, this is a sort of guided walkthrough of the gameplay, which we shared with the end users because we did not yet have a prototype available. But it was enough for them to start understanding what we were planning to do and to give some feedback. Another example of our loop concerns a technical question. We started by delivering this game using head-mounted displays, and during the evaluations that we ran with the stakeholders, we understood that these devices were not suitable for everyone, specifically for those users who had balance problems and were not comfortable with virtual reality. This is a good example because, straight away and in a relatively simple manner, it directed us toward also using another approach.
In this case we use a tablet, which does not need to be worn as a head-mounted display and can instead be held in front of the user as a window into the virtual environment. This simplified the process a lot for those people who had problems with head-mounted displays, and it is, I think, a good example of how simple issues like this can be picked up early enough in the development process and addressed. Another example is the scenario. This is, unfortunately, something we have not yet solved because, as you can guess, we are specifically targeting teenage cochlear implant users, and even within this limited population there are so many possible requirements in terms of scenarios, characters, and contexts. It has been very challenging to understand what we can implement and what we will need to implement. Obviously, we do not have the freedom to develop an application for every single person, but at the same time we can design our applications so that they are highly customizable, and this is exactly what we are doing. It is another good example of something we discovered through our participatory design stage. Finally, an important item is the audio playback. Talking about people with cochlear implants: are we going to use headphones that go around the cochlear implant and are connected through a wire to the head-mounted display, or are we going to use wireless connectivity? This is another important question we are trying to answer, and it is not easy, because many cochlear implant users use one or the other solution, or both, and there is a wide variability of devices we need to deal with, which might not be compatible with certain types of streaming. There is latency in streaming but, at the same time, it is potentially better than relying on headphones, which might cause problems by sitting over the implant.
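To illustrate why streaming latency matters here, this sketch adds up an illustrative end-to-end latency budget for head-tracked audio. All the stage figures are typical orders of magnitude from consumer audio, not measured BEARS values, and the acceptability threshold is only an often-cited ballpark for motion-to-sound delay in head-tracked binaural rendering.

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Latency contributed by one audio buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

# Illustrative pipeline stages and their delays (milliseconds).
# The wireless figure reflects typical Bluetooth A2DP behavior, which
# is commonly well above 100 ms; low-latency codecs do much better.
stages = {
    "game audio buffer (512 samples @ 48 kHz)": buffer_latency_ms(512, 48_000),
    "wireless codec (typical Bluetooth A2DP)": 150.0,
    "implant sound processor": 10.0,
}
total_ms = sum(stages.values())

# Motion-to-sound latencies below roughly 60-80 ms are often cited as
# acceptable for head-tracked spatial audio, so a standard wireless
# link alone can be the dominant, potentially disqualifying stage.
within_budget = total_ms <= 80.0
```

This kind of back-of-the-envelope budget is one way to compare the wired-headphone and wireless options before committing to either.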
This, therefore, is another example of a challenging situation that we have identified and need to deal with. Now, very briefly, looking at the timeline of the project: we are at the beginning at the moment, going through a series of prototypes and user involvement between 2021 and 2023, after which we will move to the clinical trials. This slide gives a little picture of where we fit within the project. And finally, I close with this last slide showing some pictures of the research team involved, which, as you can see, is very wide and includes principal investigators, PhD students, postdocs, and so on. That brings me to the end. I hope you enjoyed the presentation and that you will have time to ask some questions. Thank you.