Now, everyone knows about 3D movies and gaming, but not all of you will have experienced 3D sound. And I don't just mean stereo or surround sound; I mean audio recorded with ears, the same way we experience sound in real life. I'm here with the Audio Experience Design team and Kat Paul from Imperial College London, and I'm all ears to find out what you have been doing. So Kat, what is this all about?

So essentially we want to make sound sound immersive. When we're listening in the real world, like me standing next to you, you use your two ears together. If a sound is coming from your right, it'll be louder in your right ear than in your left, and it'll also arrive at your right ear slightly earlier. But what do you do when sounds are coming from the vertical plane through the middle of your head, where those timing and level differences are always the same? This is when we use the shape of our ears, and of our head and torso, to help us figure out where sounds are coming from, because a sound coming from below will bounce around slightly differently than a sound coming from above. But we don't all have the same head and the same ear shape. So if we want to make a really immersive spatial sound scene for you, we want to be able to replicate how sound bounces around your individual body. The way we do that is we take a 3D scan of your head and shoulders, and then we simulate with physics models how sound bounces around for you from every single position. From that we can create a sort of sonic fingerprint for you, which we can then apply to any sound we record to say what position it's coming from.

That is brilliant. Now, I am a sucker for an immersive experience, and you're telling me that you can scan my ear so that I can get an audio experience that is not only immersive but super personalised for me?

Yeah, exactly.

Let's do it, Kat. I am on board. Where do I need to go? What do I need to do?
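The two cues Kat describes, the level difference and the arrival-time difference between the ears, can be put into rough numbers. The sketch below uses the classic Woodworth spherical-head approximation with an assumed head radius of 8.75 cm; it is a toy model for illustration, not the team's actual physics simulation.

```python
# Interaural time difference (ITD) under the Woodworth spherical-head
# approximation: ITD = (a / c) * (sin(theta) + theta).
# Assumptions: head radius a = 8.75 cm, speed of sound c = 343 m/s.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def itd_seconds(azimuth_deg, head_radius_m=0.0875):
    """Approximate ITD for a source at azimuth_deg
    (0 = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / SPEED_OF_SOUND) * (math.sin(theta) + theta)

# A source directly to the side leads the far ear by roughly 0.66 ms.
print(f"{itd_seconds(90) * 1000:.2f} ms")  # prints "0.66 ms"
```

A fraction of a millisecond sounds tiny, but the auditory system resolves interaural delays of tens of microseconds, which is why this cue works so well for left-right localisation and is useless for sources in the vertical median plane, where the delay is zero.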
Come round the side into this booth; this is where we do the ear scanning.

Right. And I sit on this stool?

Yes, if you come and sit on the stool, I will just set up the scanner. This is the scanner that we use. It works with a flashing light, so if I point it here, hopefully you can see my hand appear on the screen.

That's great, yeah.

So we're going to do exactly the same for your ear. If I can get you to face forwards, I will come round, and if you can close your eyes, I'm just going to move the scanner around your head and build a 3D model.

Okay. Eyes are closed. I'm ready.

There you go. As I move it around, it tracks what it has scanned, and I can angle the light so that we get a good picture of your ear.

Okay, so you're just moving it across my ear now.

And the light shines into every nook and cranny, and it builds up a 3D model of your ear.

Every nook and cranny.

Yep.

Oh no.

And I'm going to stop now.

Okay, and can I move?

Yeah, now you can move, and now it's all done.

And is this it processing the data?

Exactly, it's processing the data. It builds up little points and then meshes them together into a nice 3D model that we can either look at on the screen or, if you really wanted to, print out on a 3D printer.

Amazing. Oh! Okay, let's go down a little bit. Oh, I'm a bit nervous. This is my ear. I have never seen it. Oh my gosh, the detail, and you can even see the ear piercing. It's a part of your body that you just never get to see, because it's on the side of your head. Oh my word. That is absolutely fascinating. And you use this to, I suppose, reverse engineer the immersive experience, to make it personalised.

Yeah, exactly. We can simulate how the sound bounces around in your ear and then create that sonic fingerprint for you.

That is brilliant.
Roma, I do hope you are listening to all of that, because no doubt I will test you later, but for now it's back to you.

Oh, Fran, I don't like having tests, but I am thrilled to have a psychoacoustics specialist, Lorenzo Picinali, here to tell us a little bit more about what Fran just did. Lorenzo, welcome. It's great to have you here.

Thank you very much.

What is 3D sound?

3D sound, or immersive sound, is effectively what we experience in everyday life. You walk down the street and you hear sounds that come from different positions, at different distances, in different directions. They move around, and all these attributes of the spatial nature of sounds, their position and so on, are what we generally refer to as 3D sound or immersive sound. Normally, with these two expressions, we refer to simulating this in a virtual environment. A good example is when you go to the cinema and listen to surround sound: there you have a lot of different loudspeakers around you reproducing sound, and you perceive sounds coming from different directions. So that is the concept.

But you need lots of speakers in order to achieve that, so far?

Yes, I would say that the majority of the techniques, including those used at home, rely on multiple loudspeakers. What we are trying to do starts from the observation that we actually hear everything through two ears. So why don't we go directly to those two ears, maybe with a pair of headphones, and try to trick your brain into believing that there is a sound in a position where there is nothing?

Fran's just had her head scanned. Why did she do that, and what's going to happen next?
In order to run this simulation, and to really trick people into believing that there are sounds where there are none, we need to take into account certain morphological features of the ear and head: the head itself, the pinna, which is this outer part of the ear, but also the shoulders, and the fact that, for example, I don't have much hair but I do have a beard. All these things influence the way sound is modified by the body and the way my brain understands where the sound is coming from.

So you and I will hear things quite differently.

Yes, definitely. I have a very big head, because I'm very intelligent, a lot of brain to be kept inside, and therefore I will have bigger differences in, for example, the intensity at the two ears. So we scan the ear, we scan the head, and then we use certain mathematical methods to work out how sound travelling from a given position in space is modified by the time it reaches the entrance of your ear canal. And eventually, well, even now, we are starting to train neural network models, artificial intelligence, to predict someone's own signature from less and less data. Eventually it could be just a picture, or even just some data that you input into the device.

So you're trying to make it easier to figure out that person's individual characteristics, and to give them the right sound based on all this data that you're collecting.

Exactly, to make it much easier than what we can do at the moment.

So I've got some headphones here. Should I stick them on? Is that the plan?

Yeah, let's have a go. This is just one of the demonstrations that we use; they are available to whoever visits our stand, and also online. Headphones are important because we need the left signal to reach your left ear and the right signal to reach your right ear, which doesn't happen with a normal pair of loudspeakers.
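Once a listener's per-direction filters (head-related impulse responses, HRIRs) exist, placing a sound in space comes down to one convolution per ear. The snippet below sketches that pipeline with hand-made stand-in HRIRs, a pure delay plus attenuation for the far ear, representing a source on the listener's right; it is not the team's method, and measured or scan-simulated HRIRs would simply replace `hrir_left` and `hrir_right`.

```python
# Binaural rendering sketch: filter one mono signal with a left- and a
# right-ear impulse response to produce a stereo signal for headphones.
# The HRIRs below are toy stand-ins (delay + attenuation), not real data.
import numpy as np

fs = 44100                                  # sample rate, Hz
rng = np.random.default_rng(0)
mono = rng.standard_normal(fs)              # 1 s of noise as a test signal

itd_samples = int(round(0.00066 * fs))      # ~0.66 ms interaural delay -> 29 samples
hrir_right = np.zeros(64)
hrir_right[0] = 1.0                         # near ear: arrives first, full level
hrir_left = np.zeros(64)
hrir_left[itd_samples] = 0.5                # far ear: later and quieter

left = np.convolve(mono, hrir_left)
right = np.convolve(mono, hrir_right)
stereo = np.stack([left, right], axis=1)    # (samples, channels) for playback
print(stereo.shape)                         # (44163, 2)
```

Switching to a different HRIR pair for each source direction, and cross-fading between pairs as a source moves, is essentially how headphone-based spatial audio engines animate sounds around the head.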
If you want to listen along, pop your headphones in and you'll be able to hear what I am about to hear now.

Good morning, I'm here in front of you. Or maybe I'm moving round to your right side. Or again, I could be here. Or here. Or here.

Okay, that's weird.

Or back in front of you.

That was a particularly strange experience, because I was listening to your voice...

My voice, whispering and talking all around you.

It was like you were on different sides of me, but also closer and farther away. But clearly I could see you there, so my brain was a little bit confused.

That is what often happens. Unfortunately, sight often takes over. This is why it's often difficult to create a clear sound image that is frontal: when people experience this, they often hear sources behind them, because they can't see anything in front of them, so they believe the sound must be behind. Closing your eyes helps in this case.

Yeah, maybe. A bit disconcerting. So, is this going to be a big feature of the entertainment industry?

Well, it already is, to a certain extent. Apple's latest products, their AirPods, already have some of this technology. Now, we don't know exactly what they do, but they do something in this direction. And it is possible that Dolby Atmos and all these new formats, from both the audiovisual and the music industry, will allow more of this technology to become available. It's also true that when I started research in this field, twenty years ago, headphones were not as widely used as they are now. If you go on the Tube, more than half of the people will be wearing a pair of headphones; I had a student whose headphones weren't even connected to anything, they were just a fashion statement. So people have headphones, and if you have headphones, then we can do this. Hopefully it's going to become much more mainstream, more integrated with video game experiences and other things, and it might become widely available soon.
Oh, that's very exciting. What about outside of entertainment?

Yes, outside of entertainment is actually the area where we have done more work, applying this technology to create impact in, let's say, non-leisure activities. We have used it, for example, to develop auditory displays for blind people. More recently we have worked a lot with hearing aid companies, and the other project that we are showing here at the Royal Society is called BEARS, which stands for Both EARS. It involves the development of a series of video games, virtual reality video games, where we employ this technique. What we're trying to do is replicate what happens in real-life scenarios, specifically in difficult listening situations where you have, for example, multiple talkers and a target that you want to listen to. We are planning to use these games with children and teenagers who have bilateral cochlear implants. When you have these sorts of digital ears surgically implanted on both sides, the two ears might be rather mismatched, for many reasons: not only might one ear be louder than the other, but they might also have a different pitch and a slightly different timbre, so the brain is often not really able to fuse the two together. So we are using a series of games that can be played only by using both ears at the same time, to help them train to use the two implants, to remap their auditory system so that the brain is able to use these altered cues more successfully.

I always love a piece of engineering or science that has entertainment value but also medical applications. So thank you so much for joining me, Lorenzo. It's been great to hear about your research.

Okay, thank you very much.

Now, we're going to take a short break. When we come back, Fran will be going on a special mission deep into the archives of the Royal Society, to find one of its treasures.