So here you're showing 3D audio, so what is that, 3D audio? Yeah. It's actually immersive audio, which means that you're enveloped by sound all over. So this is an iPhone, right? This is a normal iPhone, yes. And the headphones are normal? You can buy them in any store.

So what do you do with the audio? What happens? What happens is that we use a normal technology, which is everywhere on mobile, which is everywhere on a set-top box or in a TV: we use HD surround. We add a small amount of data, which is 2 kbps. And all of a sudden you're enveloped by sound in the famous format NHK 22.2. Of course, you could use any other 3D format. So just 2 kbps? Yes. And it says here you are doing 4K, so it's with 4K, for example, or 8K, right? Yes, it's just the audio. This is actually a production by France Télévisions, which is in 4K. And they have also mixed the sound in 22.2. Now imagine you have 24 channels; how are you going to transport that? It's impossible.

So only 2 kbps, and what else, compared to normal audio? We use normal HD surround, 5.1 or 7.1. And then we use our famous 2 kbps. And this actually transforms the whole system, by means of mathematical models which have been standardized as the international standard ECMA-407, to any 3D format you want. You may now ask: how do so many loudspeakers fit into an ordinary headphone set? And the answer is very simple. You take an artificial head, you put it into a laboratory, and then you measure the response of each loudspeaker, and then you do a convolution, which is a sort of multiplication, on each signal, and you do this 48 times in this case, and all of a sudden you have 24 loudspeakers virtually on your headphones. And that's just amazing. So with just normal headphones, you feel like you have 24 on each side? Yes, you actually have about 24 virtual loudspeakers all over the hemisphere. So is this the name of the company? This is the name of the company. So you're from Switzerland?
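The convolution step described above (24 loudspeaker channels, one measured impulse response per speaker and ear, 48 convolutions in total) can be sketched as follows. This is a minimal illustration with synthetic data standing in for real artificial-head measurements, not the ECMA-407 implementation:

```python
import numpy as np

def binauralize(channels, hrirs_left, hrirs_right):
    """Render N loudspeaker channels to 2-channel headphone audio.

    channels:  (N, samples) array, one row per virtual loudspeaker
    hrirs_*:   (N, taps) arrays, one measured head-related impulse
               response per loudspeaker and ear (2*N convolutions total)
    """
    n, samples = channels.shape
    taps = hrirs_left.shape[1]
    left = np.zeros(samples + taps - 1)
    right = np.zeros(samples + taps - 1)
    for i in range(n):                 # 24 speakers -> 48 convolutions
        left += np.convolve(channels[i], hrirs_left[i])
        right += np.convolve(channels[i], hrirs_right[i])
    return np.stack([left, right])

# Toy 22.2 content: 24 channels, random impulse responses
rng = np.random.default_rng(0)
chans = rng.standard_normal((24, 1000))
hl = rng.standard_normal((24, 128))
hr = rng.standard_normal((24, 128))
out = binauralize(chans, hl, hr)
print(out.shape)  # (2, 1127)
```

Each virtual loudspeaker contributes to both ears, which is why 24 channels cost 48 convolutions.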
Yes, exactly. So where are you based? We're based in Engadin, in Graubünden, and the company is located in Morges, near Lausanne. So you're doing something with EPFL? Correct, we have a CTI project, which means it's a nationally funded project, in order to actually develop software for this standard. Is this the standard name, ECMA? So are you the first ones to show this? I'm actually the inventor of the technology behind ECMA-407.

So when did you invent it, and how long has it been in development? It's actually a dozen-years project, which involves several patents, several discoveries. I'll just give you three examples. We have actually moved astrophysics to audio, which no one dared to do before. Astrophysics? Yes. These are so-called inverse problems, which is a horrible word. But behind it is the work of Viktor Ambartsumian, the Soviet-Armenian astrophysicist. Working on what? He was working at that time on atomic states. And if you go to higher order, which means that if you want to, for instance, add an additional dimension to a mathematical system, you have the problem that you have no data available. How can you, for instance, convert an ordinary photograph into a 3D image? That's exactly an inverse problem.

So is it simulating the experience? How in your ear can you feel you have 24 speakers, when the sound comes from only one place? Correct. So this is done by means of these measurements with this artificial head, in the case of mobile phones. And you have to understand that the whole audio world, even when you consume on YouTube, is based on something which is located completely in our brain. If you take two loudspeakers, for instance headphones, and you put two slightly different signals on them, which of course are the same content, then all of a sudden you perceive sound sources which are located between the two loudspeakers. And this is called a phantom source.
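The phantom-source effect described above, where identical content at slightly different levels on two loudspeakers is perceived as a single source between them, can be illustrated with a standard constant-power pan law. This is a generic textbook sketch, not the interviewee's model:

```python
import numpy as np

def phantom_source(signal, angle_deg, spread_deg=60.0):
    """Place a mono signal between two loudspeakers via level differences.

    Constant-power panning: the same content at slightly different
    levels on the left and right channels is heard as one phantom
    source between the speakers. angle_deg in [-spread/2, +spread/2],
    0 = centre, positive = towards the right speaker.
    """
    pos = (angle_deg + spread_deg / 2) / spread_deg   # map to [0, 1]
    theta = pos * np.pi / 2
    left = np.cos(theta) * signal                     # cos^2 + sin^2 = 1,
    right = np.sin(theta) * signal                    # so power is constant
    return np.stack([left, right])

# A 440 Hz tone panned slightly to the right of centre
tone = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
stereo = phantom_source(tone, angle_deg=15.0)
print(stereo.shape)  # (2, 4800)
```

For a positive angle the right channel carries more energy, so the brain localises the phantom source to the right of centre even though both speakers play the same content.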
So what we do is, essentially, we have a model which recreates these phantom sources, and we compress that into these two kilobits per second. So somehow each ear gets the sound at a slightly different time or something, and then you feel that it's on the right, or something like that? Yes, it simply means that you have differences in level, in time, in frequency, and this all gives you an immersiveness which realistically would be there.

So in this scene, for example, where would you want to be located in the scene? Would you want to be sitting in front, or how does it work with the positioning of the person in the space? If you look, actually, you can see the microphones, which have just been placed. The picture has changed, but you could see a beautiful combination of a surround and ultra-high-definition setup, which has been done by France Télévisions. Look here, these are spot microphones all over. And France Télévisions actually had the ingenious idea of placing the orchestra around a virtual conductor, but with a very, very beautiful and immersive image also to the back.

So here, can you hold the camera for one second? I'm going to listen to this one. So I'm right now inside of the... I'm getting 3D audio. Correct. So it seems to be ready. Is it shipping? Yes. We actually have two levels to proceed on in order to bring this technology to the market, to the consumer, to the everyday world. Are you able to get it already? Or are you talking to broadcasters? What are you doing? Our customers are two-fold: primarily broadcasters, and also manufacturers from the consumer world. Are they already shipping products with it, or not yet? We already have a product for professional broadcasters by Maya Communications, which is a black box that allows broadcasters to go ultra-high definition.
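The time difference mentioned above (each ear getting the sound at a slightly different moment) is the interaural time difference, one of the main localisation cues alongside level. A minimal sketch of the idea, with assumed sample rate and delay values:

```python
import numpy as np

def apply_itd(signal, itd_seconds, fs=48000):
    """Delay one ear relative to the other to shift perceived direction.

    A source to the right arrives at the right ear first; the left ear
    hears the same content a fraction of a millisecond later (the
    interaural time difference, at most roughly 0.7 ms for humans).
    """
    delay = int(round(itd_seconds * fs))
    left = np.concatenate([np.zeros(delay), signal])   # left ear lags
    right = np.concatenate([signal, np.zeros(delay)])  # right ear leads
    return np.stack([left, right])

# A click delayed by 0.5 ms on the left: perceived to the right
click = np.zeros(480)
click[0] = 1.0
ears = apply_itd(click, itd_seconds=0.0005)
print(np.argmax(ears[0]) - np.argmax(ears[1]))  # 24-sample lead on the right
```

Real renderers combine such time, level, and frequency differences per source; this only isolates the time cue.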
We, on the other side, have an app, which you download, which we are currently working on, as a complement, because it always refers to a specific broadcaster; then you can actually have this experience. Ultra-high-definition audio will only start in 2016. So we are already quite far advanced, in the sense that we are prepared for the market on the side of the broadcaster, who can verify that the technology is really performing as it promises. And on the other side there is this app, which will be out exactly when ultra-high-definition audio starts.

Could this be added to all media players on Android, or something like that, basically as a new codec? It's a little app you download, and all of a sudden you have a normal... You play the video through that app. You already have your MPEG-4 codec on it, right? So you only need this little update, and all of a sudden you have this experience. All the media players on Android, or just that app to play back the video? We work agnostically regarding base codecs. So it could be MPEG-4, it could be MPEG-D, it could be actually HEVC with respect to video, it could be any format; it can be Dolby, it can be DTS. We don't want to make war. VP9? VP9, yeah, or, what is it called, Ogg is a beautiful example. All these codecs work for one simple reason: because we don't want to make war. We are actually outside the business of saying we have a proprietary codec and we go with it. We simply add this complement to any codec, might it be Dolby, might it be DTS, might it be MPEG, and might it be Ogg, which is open source.

So this is awesome for headphones; how about speakers in the home? What's the advantage there? The advantage is... You also get this experience? You can get it, but in a little bit different way. Just imagine, for instance, this television set. It's far advanced, but the loudspeakers built in are poor, which means it's simply two stereo speakers.
Now, the idea of bringing 3D to the home is a little bit further advanced, and there are companies like Samsung, Philips and others working on these solutions, and they already exist, I've seen them. For instance, you put four loudspeakers in the corners. Four? But these are not normal loudspeakers, these are dipoles. So they beam the sound directly to your ear, and by means of crosstalk cancellation, which is a technology already known from the 70s, you can make the beam of this loudspeaker arrive only at this ear, and the other one at this ear. And you can even have a quite large spot where you can sit. And now you perform the same thing as here: you create a virtual headphone, which is sitting here. And you're simply sitting on the sofa, in front of the TV, somewhere, and you're always hit by this 22.2 sound.

How does the sound feel compared to normal speakers? If you go to NHK, they show such a system. They already have this beaming system? That's right, but they do it in a little different way. They use a relatively similar system by a former MPEG expert, who is a colleague of mine. So it's a technology which is already known, and this is future TV. Now, if you want me to summarize what Swissaudec does in one word: we take normal HD, we put our little data inside, and all of a sudden you have an unlimited, beautiful media experience wherever you go, on mobile and in the home, wherever you wish to have natural sound.

But if you don't have all these audio channels, if you don't record all these different audio channels, you kind of have to guess what it should be when you make this system. If you only have a normal stereo recording. Then you are actually here, if you look here. Don't believe that we do... This is not a stereo recording. They have many microphones, right? So this is 22.2, which means that internally the standard is able to represent 24 channels. Inside are running 24 channels, imagine that. But here you don't have them.
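The crosstalk cancellation mentioned above is classically done by inverting the 2x2 matrix of acoustic paths from the two loudspeakers to the two ears, per frequency. A regularised textbook sketch with made-up path transfer functions (not the NHK or Swissaudec design):

```python
import numpy as np

def crosstalk_canceller(H):
    """Invert the 2x2 acoustic path matrix per frequency bin.

    H[k] = [[H_LL, H_LR], [H_RL, H_RR]]: transfer functions from each
    loudspeaker to each ear at frequency bin k. Pre-filtering the
    binaural signal with this inverse makes the left speaker's beam
    arrive (ideally) only at the left ear and the right speaker's only
    at the right ear. Tikhonov regularisation keeps ill-conditioned
    bins from blowing up.
    """
    eps = 1e-3
    eye = np.eye(2)
    return np.array([np.linalg.solve(Hk.conj().T @ Hk + eps * eye,
                                     Hk.conj().T) for Hk in H])

# Toy paths: direct gain 1, crosstalk 0.4 with a frequency-dependent phase
bins = np.arange(8)
H = np.stack([np.array([[1.0, 0.4 * np.exp(-1j * 0.2 * f)],
                        [0.4 * np.exp(-1j * 0.2 * f), 1.0]]) for f in bins])
C = crosstalk_canceller(H)
print(np.round(np.abs(H[0] @ C[0]), 2))  # close to the identity matrix
```

When `H @ C` is (nearly) the identity, each ear receives only its intended binaural channel, which is what creates the "virtual headphone" at the listening spot.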
It's kind of like stereo sound plus metadata. So what's happening here is very simple. This has been bought in a store, this has been bought in a store; nothing else. We have an app on iOS. So what's happening here is, you put on the headphones here.

But you can also use my head right now as a measurement device. I'm sitting inside a 3D audio lab, and there are mics right where my pinnae are, inside the ear. Now each loudspeaker makes a sweep, which is measured as a room impulse response. And now you put these 24 channels through such a convolution for the left ear and for the right ear. And this simply means that in the end you do this operation 48 times. And then all of a sudden you have 24 virtual loudspeakers on ordinary headphones. Here, this device is not able to perform these 48 convolutions; the load is too high. So Swissaudec has developed mathematical tricks where you are able to reduce the number of convolutions and still get the sound experience.
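Swissaudec's actual tricks are not public, but one standard way to cut the cost of many long convolutions is to work in the frequency domain: one forward FFT per channel, products accumulated per ear, and only two inverse FFTs. A sketch of that generic optimisation, mathematically identical to the 48 direct convolutions:

```python
import numpy as np

def binauralize_fft(channels, hrirs_left, hrirs_right):
    """Frequency-domain binaural rendering.

    One forward FFT per channel, per-ear accumulation of spectral
    products, then just two inverse FFTs, instead of 2*N time-domain
    convolutions. Same output, much lower load for long filters.
    """
    n, samples = channels.shape
    taps = hrirs_left.shape[1]
    size = samples + taps - 1
    acc_l = np.zeros(size // 2 + 1, dtype=complex)
    acc_r = np.zeros(size // 2 + 1, dtype=complex)
    for i in range(n):
        spec = np.fft.rfft(channels[i], size)          # one FFT per channel
        acc_l += spec * np.fft.rfft(hrirs_left[i], size)
        acc_r += spec * np.fft.rfft(hrirs_right[i], size)
    return np.stack([np.fft.irfft(acc_l, size),        # only two inverse FFTs
                     np.fft.irfft(acc_r, size)])

rng = np.random.default_rng(1)
chans = rng.standard_normal((24, 1000))
hl = rng.standard_normal((24, 128))
hr = rng.standard_normal((24, 128))
out = binauralize_fft(chans, hl, hr)
print(out.shape)  # (2, 1127)
```

Real-time renderers typically combine this with block-wise (partitioned) convolution so latency stays low; the point here is only that the 48 convolutions need not be computed naively.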