Hi, welcome to CES 2019. My name is Roberto Senino and I am a research manager at STMicroelectronics. This demo is about high-quality voice recording exploiting bone conduction, that is, the transmission of audio through vibration of the skull. When we talk, our skull vibrates, and we can record those vibrations as an audio signal using an accelerometer designed to respond to vibrations rather than inclinations.

Here we see our system setup: a board using microphones and accelerometers to record audio reproduced by this mock-up. When I turn on the system, a voice will speak through the panel, and this voice generates vibrations in the panel. The vibrations are recorded by the accelerometer, and the audio is recorded by the onboard microphone. In this PC GUI, you see the individual path of each sensor: the microphone at the top, shown in time and frequency, and then the accelerometer, also in time and frequency.

To build the output of this system, we take the lower portion of the spectrum, up to 2 kHz, from the accelerometer, and the higher portion of the spectrum from the microphone, as you can see in these two diagrams. We combine them into a single audio signal in which the acoustic gains of the two sensors have been equalized, and this combined signal becomes the output of the system.

Let's try to record the audio coming from the demo. We turn on this recording system, where we actually see two channels: one represents the audio as recorded by the microphone alone, and the other represents the audio recorded by the combination of microphone and accelerometer. Let's listen to each of these recordings to understand the improvement in quality. I'm going to split the two signals into individual tracks so that we can listen back to them.
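The spectral split described above can be sketched in a few lines of code. Only the 2 kHz crossover comes from the demo itself; the sample rate, filter order, and all function and parameter names are illustrative assumptions, not ST's actual implementation.

```python
# Minimal sketch of the crossover combination: accelerometer supplies
# the band below ~2 kHz, the microphone supplies the band above it.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16000           # sample rate in Hz (assumed)
CROSSOVER_HZ = 2000  # the demo takes the spectrum up to 2 kHz from the accelerometer

def combine_bone_and_air(accel, mic, fs=FS, fc=CROSSOVER_HZ, accel_gain=1.0):
    """Low-pass the accelerometer, high-pass the microphone, and sum.

    accel_gain stands in for the acoustic-gain equalization between the
    two sensors; in a real system it would come from calibration.
    """
    lp = butter(4, fc, btype="low", fs=fs, output="sos")
    hp = butter(4, fc, btype="high", fs=fs, output="sos")
    low = sosfilt(lp, accel)    # bone-conduction band: voice vibrations only
    high = sosfilt(hp, mic)     # air band: detail above the crossover
    return accel_gain * low + high
```

A fourth-order Butterworth pair is just one reasonable crossover choice; the point is that each sensor contributes only the band where it performs best.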
Now, if I turn on this bottom track and play it back through the loudspeaker, you can hear heavily disturbed audio: this is what you would get using just a microphone with no further processing. If we listen to the other track, we hear the output of the actual system, which is clear voice with none of the noise you were hearing previously.

How does this work? The accelerometer is sensitive to the vibration of the skull, which is generated only by my voice; the accelerometer portion of the signal is totally insensitive to audio coming from the environment. When we recombine accelerometer and microphone, we get this level of clarity from the physical nature of the signal itself, and this gives the result.

It should be noted that this accelerometer has a very linear behavior in frequency and a TDM interface that allows easy combination with the microphone. The computational load on the CPU is much lower in this system than in traditional noise-reduction systems.

Thank you for listening to this demo. For further information, please refer to www.st.com.
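The noise rejection described above can be illustrated with a small simulation: a low-frequency "voice" tone reaches both sensors, while an ambient tone reaches only the microphone. After the 2 kHz crossover, the low band comes entirely from the accelerometer, so the ambient tone is suppressed. All signal parameters here are made up for illustration and are not taken from the actual demo.

```python
# Sketch: ambient noise below the crossover never enters the output,
# because that band is taken from the bone-conduction (accelerometer) path.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000
t = np.arange(fs) / fs                      # one second of signal
voice = np.sin(2 * np.pi * 300 * t)         # skull vibration from the talker
ambient = 0.5 * np.sin(2 * np.pi * 500 * t) # room noise, airborne only

accel = voice            # accelerometer: voice vibrations only
mic = voice + ambient    # microphone: voice plus room noise

lp = butter(4, 2000, btype="low", fs=fs, output="sos")
hp = butter(4, 2000, btype="high", fs=fs, output="sos")
combined = sosfilt(lp, accel) + sosfilt(hp, mic)

def tone_power(x, f):
    """Magnitude of the f Hz component (1 Hz bins with a 1 s window)."""
    return np.abs(np.fft.rfft(x))[f] / len(x)

# The 500 Hz ambient tone is far weaker in the combined output than in
# the raw microphone signal, while the 300 Hz voice tone is preserved.
print(tone_power(mic, 500), tone_power(combined, 500))
```

This is the mechanism behind the clean track in the recording: no adaptive noise estimation is needed, which is consistent with the low CPU load mentioned above.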