Here in the waveform, an analog signal has been sampled at regular intervals. The points where it is sampled are the sampling points, and the values captured there are the sample data, converted from analog to digital. How many times it is sampled per second is the sampling rate, in samples per second. These pieces of information are required to convert the signal back from digital to analog. Here it is a single channel; the same thing with two channels is called stereo, and it keeps going beyond two channels with 5.1-channel or 7.1-channel audio, but the concept is the same.

Let's move on to the audio subsystem in an SoC. The audio subsystem comprises the following blocks: the I2S interface, the audio subsystem clock, DMA, I2C and the codec. Here we see a typical audio block in an SoC; the SoC here is nothing but an application processor, since I am talking about a mobile SoC. We have an I2S controller which receives data from memory through DMA. This I2S controller sends the data through the I2S interface to the mixer, and the mixer in turn transfers the data to the audio codec, again over an I2S interface. There is also an I2C controller, which is required to control the audio codec over the I2C bus. We will see each item in detail in the coming slides.

I2S uses three different data formats, and any one of them can be used between master and slave; the master and slave must use the same data format when transmitting over I2S. The three formats are the I2S format, the left-justified format and the right-justified format. In the I2S format, whatever data appears while the LR clock is low is treated as left-channel data, and the data while it is high is treated as right-channel data. In the left-justified format it is the opposite: the data while the LR clock is high is the left-channel data, and while it is low it is the right-channel data.
Also, in the left-justified format the data starts immediately when the LR clock transitions, whereas in the I2S format it starts one bit clock after the transition. In the right-justified format, the last bit of the data is aligned with the end of the channel, that is, with the LR clock edge. So this is the difference between the I2S data formats.

Clock. The clock is a very important block for audio, because based on how the clock is configured, all the other clocks like the LR clock are derived. The frequency of the LR clock is nothing but the sampling rate: if the data is sampled at 48 kHz, then the frequency of the LR clock should also be 48 kHz. So the clock must be configured based on the sampling rate, and every time the sampling rate changes, the clock also needs to change, or the divider must be modified in such a way that you get the proper left-right clock. The clock can be generated by the I2S block on the application processor side, or it can be generated by the codec side. If the application processor generates the clock and passes it to the codec, then the application processor is the master; similarly, if the codec generates the clock and gives it to the application processor's I2S block, then the codec is the master.

DMA. Normally three DMA channels are used in a mobile SoC: one DMA channel for transferring the primary TX data, one channel to receive data from the I2S, and one more channel used for secondary data transmission over I2S. So there are two I2S controllers, one for primary data transfer and another for secondary data transfer.

I2C interface for the codec. The audio codec chip is controlled through the I2C interface because the audio codec sits outside the application processor. To control it, we have to send commands over the I2C bus. So when we write a driver for the codec, we use regmap to access its registers via I2C.

I2C layers.
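Before moving on to the I2C layers, the clock relationships above can be sketched with a small calculation. This is an illustrative sketch, not taken from the slides: it assumes the usual serial-audio arrangement where the bit clock must carry channels × bits-per-sample bits per LR-clock period, and the 24.576 MHz master-clock value is an assumed example from which an integer divider is derived.

```python
def i2s_clocks(sample_rate_hz, channels=2, bits_per_sample=16, mclk_hz=24_576_000):
    """Derive the LR clock and bit clock for an I2S link.

    The LR clock frequency equals the sampling rate; the bit clock carries
    channels * bits_per_sample bits per LR-clock period.
    """
    lrclk_hz = sample_rate_hz                       # one L+R frame per sample period
    bclk_hz = sample_rate_hz * channels * bits_per_sample
    divider = mclk_hz // bclk_hz                    # integer divider from the master clock
    return lrclk_hz, bclk_hz, divider

# 48 kHz, 16-bit stereo: LRCLK = 48 kHz, BCLK = 48000 * 2 * 16 = 1.536 MHz
lrclk, bclk, div = i2s_clocks(48_000)
print(lrclk, bclk, div)   # -> 48000 1536000 16
```

This also shows why the clock block must be reprogrammed on a sampling-rate change: moving from 48 kHz to 44.1 kHz changes the required bit clock, so either the master clock or the divider has to be adjusted to keep the LR clock at exactly the sampling rate.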
So how does the audio codec driver communicate with its codec through I2C? The audio codec driver is an I2C client driver. Here you can see the client is the audio codec driver; it talks to the I2C driver, which sends or receives the data over I2C. That I2C driver communicates with the I2C hardware, and the I2C hardware is nothing but the adapter hardware you can see on the hardware side. That communicates over the I2C bus with the I2C devices; of the bottom two I2C devices, one could be the audio codec.

Software architecture of the audio driver. Here we can see the HAL in the application layer; apart from that, the remaining parts belong to the kernel side. The ALSA library and ALSA core belong to the ALSA framework, and the part below that is the sound card driver. Inside the sound card driver we have a machine driver, I2S driver, mixer driver and codec driver. The sound card driver acts as a glue layer: basically it creates the links between the I2S, the mixer and the codec. The I2S is directly connected to the application processor. The mixer has multiple interfaces: an interface from the application processor, an interface from the modem, and an interface from a Bluetooth device. So it takes multiple I2S inputs and passes them on to the different users: back to the modem, to the codec, and back to the application processor as well. All these links are registered by the sound card driver with the ALSA core, so the ALSA core communicates directly with the sound card driver, and from there it goes to the respective drivers. The sound card driver is the one that actually registers all these links and creates one sound card; it is through the sound card driver that the sound card name is registered with the kernel.

We also have the DMA driver and the LPAS driver. The DMA driver is linked to the I2S: the I2S binds the DMA channels to its I2S streams.
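The glue role of the sound card driver described above can be sketched as follows. This is a toy model under assumed names, not the actual ASoC API: it only shows the idea that the machine driver collects DAI links between component drivers and registers one card for all of them.

```python
class SoundCard:
    """Toy model of a machine/sound-card driver acting as a glue layer:
    it collects DAI links between components and registers one card."""

    def __init__(self, name):
        self.name = name
        self.links = []            # each link ties a CPU-side DAI to a codec-side DAI

    def add_dai_link(self, link_name, cpu_dai, codec_dai):
        self.links.append({"name": link_name, "cpu": cpu_dai, "codec": codec_dai})

    def register(self):
        # In a real driver this is where the card (and one PCM device per
        # link) would be registered with the ALSA core.
        return {"card": self.name, "pcm_devices": len(self.links)}

card = SoundCard("mobile-audio")
card.add_dai_link("ap-codec", cpu_dai="i2s0", codec_dai="codec-hifi")
card.add_dai_link("modem-codec", cpu_dai="modem-pcm", codec_dai="codec-voice")
card.add_dai_link("bt-codec", cpu_dai="bt-sco", codec_dai="codec-bt")
print(card.register())   # -> {'card': 'mobile-audio', 'pcm_devices': 3}
```

The point of the model is that upper layers only ever see the card and its links; which component drivers sit behind each link is the sound card driver's business.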
And LPAS stands for low power audio subsystem. It handles the different low-power states of the audio subsystem: for example, when the system wants to go to sleep mode, and when it is not in sleep mode, how the clocks have to be configured is all done in the low power audio subsystem driver.

Data flow in the audio subsystem. Here the application calls PCM write into the ALSA layer. The ALSA layer copies the data passed from the application into the DMA buffers; this is a CPU copy. So a copy happens from the application buffer to the DMA buffers, and from the DMA buffers to the I2S lines the audio DMA transfers the data as needed. This is how the data from the application reaches the I2S interface. Once it reaches the I2S interface, the data is transferred over the I2S lines to the mixer, and from the mixer it is again transferred over I2S lines to the codec.

Control flow in the audio driver. Here we can see the HAL, which is the application layer, calls the ALSA library through system calls. The ALSA library calls the ALSA core, and from the ALSA core it calls the sound card driver, because the sound card driver is registered with ALSA. When the sound card driver is called, it calls the respective I2S, mixer and codec drivers based on the functionality.

Software implementation view of the audio driver. Here you can see that each hardware interface is treated as a separate entity. Meaning, if there is an interface between the application processor and the codec, that is a separate link, called a digital audio interface (DAI). When the application processor wants to play some sound, it uses DAI1 to transfer the data to the codec. Similarly, during a call, the communication processor, meaning the modem, uses the link DAI2, and for FM radio, DAI3 is used to transfer the data to the codec. Each DAI serves a separate functionality, and a separate device is created for each, for example pcmC0D0c or pcmC0D0p.
So C0 is the card, D0 is the device, and p is for playback, c for capture. For DAI1 it will be pcmC0D0, for DAI2 it will be pcmC0D1, and so on: a separate PCM device is registered for each DAI link. This happens in the sound card driver.

This slide shows the sequence diagram of playback and capture. When the application, the HAL, wants to play back or record, first it calls PCM open to open the device. When the device is opened, it checks whether the device supports all the parameters or not by calling the info function and then the hardware params. In the hardware params, parameters like sampling rate and bits per sample are passed. If these parameters are supported by the sound card driver, it returns success; if not, it fails, and the PCM open fails if that particular format is not supported. If the format is supported, the PCM open succeeds. Once the PCM open is successful, the HAL calls PCM write or PCM read on a need basis: write is nothing but playback, and read is nothing but recording, that is, capture.

For audio playback, the HAL calls PCM write. When PCM write is called by the HAL into the ALSA library, the ALSA library in turn calls PCM prepare, which actually prepares the DMA buffers. Once the DMA buffers are prepared, the write-frames call is made, and the data from the HAL is written to the DMA buffers. Then the DMA is triggered by ALSA: the trigger function gets called into the I2S driver and the DMA starts. Once the DMA starts, the data goes from the DMA buffers to the I2S controller, and once the I2S controller receives the data, it pumps it out through the I2S interface.
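The open → hw_params → prepare → write → trigger sequence, together with the pcmC0D0p naming, can be sketched as a small state machine. This is a toy illustration under assumed names, not the alsa-lib API; the supported-rates set is an assumed hardware capability.

```python
import re

def parse_pcm_name(dev):
    """Split an ALSA PCM device name like 'pcmC0D0p' into card, device, direction."""
    m = re.fullmatch(r"pcmC(\d+)D(\d+)([pc])", dev)
    if not m:
        raise ValueError(f"bad PCM device name: {dev}")
    card, device, d = int(m.group(1)), int(m.group(2)), m.group(3)
    return card, device, "playback" if d == "p" else "capture"

class PcmStream:
    """Toy playback sequence: open -> hw_params -> prepare -> write/trigger."""
    SUPPORTED_RATES = {8000, 16000, 44100, 48000}   # assumed hardware capability

    def __init__(self, dev):
        self.card, self.device, self.direction = parse_pcm_name(dev)
        self.state = "open"
        self.dma_buffer = []

    def hw_params(self, rate, bits):
        if rate not in self.SUPPORTED_RATES:
            raise ValueError("unsupported rate")    # this is where PCM open fails
        self.rate, self.bits = rate, bits
        self.state = "configured"

    def prepare(self):
        self.dma_buffer.clear()                     # set up the DMA buffers
        self.state = "prepared"

    def write(self, frames):
        self.dma_buffer.extend(frames)              # CPU copy: app buffer -> DMA buffer
        self.state = "running"                      # trigger starts the DMA / I2S

stream = PcmStream("pcmC0D0p")
stream.hw_params(rate=48000, bits=16)
stream.prepare()
stream.write([0, 1, 2, 3])
print(stream.direction, stream.state)   # -> playback running
```

As in the slides, an unsupported rate makes the parameter step raise, which is the model's equivalent of PCM open failing for an unsupported format.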
This is kept in a loop until the complete buffer is played back.

Capture sequence. Capture is nothing but recording. When the HAL wants to record data, it calls PCM read. When PCM read is called, the sound card driver calls the start function, and it starts receiving data from the I2S. When capture starts, the respective mic and other functions are called in the codec: the codec is initialized and starts transmitting data over the I2S lines. Once the data is transferred over the I2S lines from the codec, it reaches the I2S controller on the application processor side. The application processor reads the data and pumps it up to the application. Once all the data is received, the application finishes by calling PCM close.

This slide shows the sequence diagram of a mixer operation. When the application, which is nothing but the HAL, calls mixer open, the ALSA library opens the mixer driver and gets the list of all the controls provided by the codec driver and the mixer driver, so it already has the details of which controls are available. When the mixer calls get-value on a particular control, it goes and reads the registers of the codec or mixer and provides the required detail back to the application. Similarly, when it wants to write some control, it goes and writes that particular setting, configuring that value. To see how the mixer works, look at the next slide.

This slide shows the block diagram of the mixer. The mixer is a hardware device that mixes digital audio data from different sources and sends it back based on the selection. Here you can see there are a lot of interfaces: one from the AP, one from the codec, one from the modem and the other from Bluetooth.
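The get-value and set-value path just described, from the HAL down to a register read or write, can be sketched with a toy register map. The control names and register addresses below are made up for illustration; a real driver would go through regmap to the hardware SFRs.

```python
class MixerDriver:
    """Toy mixer driver: exports named controls that map to registers."""

    def __init__(self):
        self.registers = {0x10: 0x3F, 0x11: 0x00}           # pretend SFRs
        self.controls = {"Master Volume": 0x10, "Mic Switch": 0x11}

    def control_read(self, name):
        # Look the control up in the driver's list, then read its register.
        reg = self.controls[name]
        return self.registers[reg]

    def control_write(self, name, value):
        # Same lookup, but write the new value into the register.
        self.registers[self.controls[name]] = value

mixer = MixerDriver()
print(mixer.control_read("Master Volume"))   # -> 63
mixer.control_write("Mic Switch", 1)
print(mixer.control_read("Mic Switch"))      # -> 1
```

The list lookup mirrors what the slides describe: the driver first checks whether a read function exists for that control, and only then touches the register and hands the value back up to the HAL.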
The TX data from these interfaces goes into the mixer and gets mixed, and the outputs are given back to the same devices. The mixer output is selected via switches available in front of the RX side; based on the switch selection, each device receives the particular data. If you look inside the mixer, all four inputs go to an adder, and in front of the adder there is a switch per input: only the sources we want to add are connected. These switches are controlled directly by the application: the application can control them through the driver. Let's see how that happens.

Controls in the mixer. The mixer driver exports some controls to the upper layer through kcontrols. These controls are used to change the SFRs of the mixer, which is done through regmap. The application calls kcontrol get or kcontrol put, passing the control and a value: the kcontrol identifies which control to operate on, and the value is what it wants to set. Here is an example. The HAL calls mixer-control get-value; the call reaches the ALSA library, which calls the respective control's read function. This goes to the mixer driver, which performs the register read: the mixer driver keeps a list of controls and checks whether that particular read function exists or not. If the read function is there, it reads the mixer register, gets the value, and that is passed back to the HAL layer.

DAPM controls. DAPM controls are the controls provided by the codec and mixer to control their various blocks. For example, when you want to play back, you can play back through the speaker, the headphone or the earpiece. This path is basically how the output is reached, and it can be enabled separately by enabling the particular device.
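The switch-plus-adder structure inside the mixer can be sketched as below. This is an illustrative model: the source names are made up, and the saturating 16-bit add is an assumption about how a digital mixer would clamp the sum of its selected inputs.

```python
def mix(inputs, switches):
    """Mix several digital audio streams sample by sample.

    inputs   : dict of source name -> list of 16-bit samples
    switches : set of source names whose switch is closed (i.e. selected)
    Only selected sources reach the adder; the sum saturates to 16 bits.
    """
    length = max(len(s) for s in inputs.values())
    out = []
    for i in range(length):
        total = sum(s[i] for name, s in inputs.items()
                    if name in switches and i < len(s))
        out.append(max(-32768, min(32767, total)))   # saturate to int16 range
    return out

sources = {"ap": [1000, 2000], "modem": [500, 500], "bt": [32000, 32000]}
print(mix(sources, {"ap", "modem"}))   # -> [1500, 2500]
print(mix(sources, {"ap", "bt"}))      # -> [32767, 32767] (clamped)
```

Opening or closing a switch here is exactly what the application does through the kcontrols: changing which sources are connected to the adder.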
"Speaker on" will enable only the speaker path, "headphone on" will enable only the headphone path, and so on. Similarly, for capture, mic 1 or mic 2 can be enabled separately. Apart from this, there are gain controls: you can set the digital gain for the DAC or ADC, the analog gain for mic 1 and mic 2, or the analog gain for the speaker, earpiece or headphone. The mixer path can also be selected: mic input data mixing can be selected through the ADC mixer controls. These are mainly the codec-side controls.

Now let us look at one use case of how a mobile device uses the audio driver, in the Android audio system. The Android audio system is how the audio framework sits in Android, using the audio kernel underneath. The topmost layer of Android is the application layer, where multiple applications are running: a phone application, a media player, or recorder applications. Below that comes the application framework, which contains the media player framework, recorder, audio manager and phone manager. The application framework calls the native framework functions, which are specific to the audio functionality: AudioTrack, AudioRecord, AudioFlinger, the audio mixer and the audio policy service. AudioFlinger is one of the main pieces: it converts all the data to PCM, and the PCM data is passed to the HAL layer. The HAL layer calls the audio hardware interface, consulting the audio policy manager.
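The per-device path enabling described above, speaker on, headphone on, mic selection, plus the gain controls, can be sketched as a small set of path switches and gains. The path and control names are made up for illustration; real DAPM widgets and kcontrol names differ per codec.

```python
class CodecPaths:
    """Toy model of DAPM-style path controls on a codec."""

    OUTPUTS = {"speaker", "headphone", "earpiece"}
    INPUTS = {"mic1", "mic2"}

    def __init__(self):
        self.enabled = set()
        self.gains = {}            # e.g. {"speaker": -6}, gain in dB

    def enable(self, path):
        if path not in self.OUTPUTS | self.INPUTS:
            raise ValueError(f"unknown path: {path}")
        self.enabled.add(path)     # only this block gets powered up

    def disable(self, path):
        self.enabled.discard(path)

    def set_gain(self, path, db):
        self.gains[path] = db

codec = CodecPaths()
codec.enable("speaker")            # "speaker on" powers only the speaker path
codec.set_gain("speaker", -6)      # analog output gain, illustrative units
codec.enable("mic1")               # select mic 1 for capture
print(sorted(codec.enabled))       # -> ['mic1', 'speaker']
```

The benefit this models is the one DAPM provides: only the blocks on the active path are powered, which matters on a battery-powered mobile device.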
The audio hardware interface is the piece that actually communicates with the kernel; it calls the kernel functions. We have already seen how the HAL communicates with the kernel, so we will move on to the use case and see the example of the voice call path.

When the modem detects an incoming call, it informs the application processor. Once the application processor, meaning Android, detects that there is an incoming call, it initiates playing the ringtone through the speaker: it starts audio playback for the speaker, enables the speaker, and plays the sound through it. Once we hear the sound, we answer the call; the call is established and the user can communicate with the other user. But before the call audio can flow, once the call is answered, the path needs to change from the application processor side to the modem side. Previously, the application processor was playing the ringtone over the application-processor-to-codec path; now the path changes to modem-to-codec. The application processor link is disconnected and the codec communicates directly with the modem. Whatever the user speaks goes to the codec and reaches the modem; similarly, whatever the other user speaks comes through the modem and goes to the speaker or the earpiece. Based on the connection: if the earpiece is in use, the audio goes to the earpiece; if the user wants to play it on the speaker, it goes to the speaker; if the user has connected a headphone, it goes to the headphone; and in case the user is using Bluetooth, the audio goes to Bluetooth.

We have reached the end of this presentation.
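The path switch during the call described above can be sketched as a routing-table update; the link names here are illustrative, matching the links the sound card driver registers.

```python
class AudioRouter:
    """Toy routing for the voice-call use case: the active link
    determines who drives the codec."""

    def __init__(self):
        self.active_link = None

    def incoming_call(self):
        # Android plays the ringtone: the AP drives the codec.
        self.active_link = ("ap", "codec")

    def answer_call(self):
        # The AP link is dropped; the modem talks to the codec directly.
        self.active_link = ("modem", "codec")

    def end_call(self):
        self.active_link = None

router = AudioRouter()
router.incoming_call()
print(router.active_link)   # -> ('ap', 'codec')
router.answer_call()
print(router.active_link)   # -> ('modem', 'codec')
```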
Let me summarize what we have seen so far. First we saw the audio subsystem: the I2S block, the mixer and the codec. The I2S block takes the data from memory and transfers it to the mixer over the I2S interface; from the mixer, the data gets transferred to the codec. For playback, the codec converts the digital data to analog and plays it on the speaker or headphone based on the configuration. Similarly, for recording, the codec receives the analog data from the mic and transfers it to the mixer; from the mixer it goes to the application processor or the modem based on the use case.

Next, we saw how the audio software architecture is laid out. We have the HAL layer at the top; that is basically the application layer. The kernel side starts from the ALSA library, then the ALSA core, and from the ALSA core it comes to the sound card driver. We have the I2S driver, codec driver and mixer driver, and these are clubbed together by the sound card driver. The sound card driver links the interfaces provided by each block and registers those links, so the application can open a device based on the link name and, based on the use case, open each device. The links created are AP-to-codec, modem-to-codec, Bluetooth-to-AP and Bluetooth-to-modem; based on the use case, the particular link is opened and the audio is played or recorded.

This entire presentation is based on a sound card driver in a mobile device, and this sound card driver may change from device to device. Here we took the example where I2C is used to control the codec; in other devices, different interfaces can be used to control the codec. Likewise, here the I2S interface is used to transfer the data from the application processor to the mixer and from the mixer to the codec; different interfaces can be used there as well. And the mixer here was placed in the application processor, but it can be placed in the codec also.
So based on the configuration, the driver will change. But the links provided to the upper layer stay the same: ALSA lets the upper layer use just the link, so the upper layer does not need to know what the bottom layer looks like. I hope this session helps you understand the basics of an audio driver in a mobile device. That's all from my side. If you have any questions, please raise them now. Thank you for attending this session.