Welcome to this session on digital sound and recording. Before proceeding, let me ask you one question: what is an example of digital sound? You may pause the video, think about the question, write down your answer in your notebook, and then resume the video to see the answer. The best example of digital sound is MP3 audio.

So what is sound? Sound is a pressure wave created by a vibrating object. These vibrations set the particles of the surrounding medium, which is typically air, into vibrational motion, thus transmitting energy through the medium. As you can see from the diagram, as the object vibrates, the particles of the surrounding medium start vibrating, and the energy is transmitted through the medium. Since the particles move parallel to the direction of the wave, a sound wave is referred to as a longitudinal wave. The result of this longitudinal motion is the creation of compressions and rarefactions in the air.

A speaker works by moving its diaphragm in and out. This causes the air particles to bunch together, forming waves. These sound waves then collide with your eardrum, vibrating it and sending a message to your brain; this is how you hear sound. As you can see from the diagram, the diaphragm moves in and out, the air particles vibrate because of that, creating rarefactions and compressions, and so the sound wave is transmitted through the air medium. The particles move back and forth about their equilibrium position, creating alternating zones of compression and rarefaction. In a rarefaction pulse, the pressure is below atmospheric pressure; in a compression pulse, the pressure is above atmospheric pressure. The figure indicates one complete wavelength, which consists of one compression pulse and one rarefaction pulse.
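The pressure-wave picture above can be sketched numerically. This is a minimal illustration, not from the lecture: a pure tone is modelled as a sine-shaped pressure deviation around the mean (atmospheric) pressure, so positive values correspond to a compression pulse and negative values to a rarefaction pulse.

```python
import math

def pressure(t, amplitude=1.0, frequency=1.0):
    """Pressure deviation from mean (atmospheric) pressure at time t seconds."""
    return amplitude * math.sin(2 * math.pi * frequency * t)

# One complete wavelength of a 1 Hz wave: the first half-cycle is the
# compression pulse (above atmospheric pressure), the second half-cycle
# is the rarefaction pulse (below atmospheric pressure).
first_half = [pressure(t / 100) for t in range(1, 50)]     # 0 < t < 0.5 s
second_half = [pressure(t / 100) for t in range(51, 100)]  # 0.5 < t < 1 s

assert all(p > 0 for p in first_half)   # compression: above mean pressure
assert all(p < 0 for p in second_half)  # rarefaction: below mean pressure
```

At zero deviation from atmospheric pressure the model gives silence, matching the description that follows.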
So, this is the start of the cycle and this is the end of the cycle. As the pressure increases, the amplitude of the waveform increases; this is the compression pulse. As the pressure decreases, the amplitude decreases; this is the rarefaction pulse. At normal air pressure, there is silence. The frequency of a sound is the number of periods per second and is measured in hertz, or cycles per second. One kilohertz indicates a thousand oscillations per second, so 1 kHz equals 1000 Hz. A sound also has an amplitude, the property we perceive as loudness. The amplitude of a sound is a measure of the displacement of the air pressure wave from its mean position.

What is digital sound? Digital sound is a representation of sound stored in the form of samples. Each sample represents the amplitude, that is, the loudness of the sound at a discrete point in time. The diagram shows the difference between analog and discrete digital sound: the green curve represents the analog waveform, the original sound signal, whereas the blue curve represents a digital sampling of that analog sound.

What is sampling? Sampling is the periodic measurement of an analog signal; it changes a continuous-time signal into a discrete-time signal. As we can see from the diagram, the continuous analog signal is converted by the sampling process into a discrete digital signal. After sampling, the signal value is known only at discrete points in time, called sampling instants. The continuous-time curve can therefore be described by the sample values.

What is the sampling rate? The sampling rate is the number of times per second that the sound is measured during the recording process. A higher sampling rate increases the quality of the recording, but requires more storage space.
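The sampling process just described can be sketched in a few lines. This is an illustrative example with made-up numbers (a 5 Hz tone sampled 40 times per second); the sampling interval Ts is the time between two sampling instants, Ts = 1/fs.

```python
import math

fs = 40          # sampling rate: samples per second
f = 5            # frequency of the analog signal in Hz
Ts = 1 / fs      # sampling interval: time between two sampling instants

def analog_signal(t):
    """The continuous-time signal being measured."""
    return math.sin(2 * math.pi * f * t)

# Sampling: measure the continuous signal at regular instants n * Ts.
# One second of recording yields exactly fs discrete sample values.
samples = [analog_signal(n * Ts) for n in range(fs)]

assert len(samples) == fs          # 40 samples per second
assert abs(samples[0]) < 1e-9      # first sampling instant: sin(0) = 0
```

After this step the signal is known only at the sampling instants, which is exactly the discrete-time representation described above.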
As we can see from the diagram, as the sampling rate increases, we have more detailed information about the signal. By taking more samples per second, that is, by using a higher sampling rate, we gather more real information about the signal and have to use less guesswork, so we can build a far more accurate version of the analog signal, and the end result, of course, is better sound quality.

What are the sampled values? The values of the function at the sampling points are the sampled values. What is the sampling interval? The time that separates the sampling points, that is, the interval between two samples, is the sampling interval. If the signal is varying slowly, fewer samples per second are needed than if the signal is varying rapidly. So the optimum sampling rate depends on the maximum frequency component present in the signal. From the diagram, we can see that the dotted lines represent the sampling instants, and Ts is the interval between two sampling instants.

The Nyquist Theorem. As per the sampling theorem, which is also called the Nyquist Theorem, a band-limited signal can be reconstructed exactly if it is sampled at a rate at least twice the maximum frequency component present in it. That is, fs should be greater than 2B, where fs is the sampling frequency and B is the bandwidth of the audio signal. From the diagram, the first waveform is sampled at its own frequency: suppose the waveform has a frequency of 1 hertz; then for one complete cycle we have only one sampling instant. In the second case, we have two sampling instants per cycle. From the picture, it is clear that we can reconstruct the signal from two samples per cycle, but we cannot reconstruct it from only one sample per cycle.
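The two cases in the diagram can be reproduced numerically. This is a hedged sketch, not from the lecture: a 1 Hz cosine sampled once per cycle lands at the same phase every time, so the samples look constant and the oscillation is invisible; sampled twice per cycle, the samples alternate between the peaks, so the oscillation is still visible in the data. (Strictly, the theorem requires fs greater than 2B; sampling at exactly 2B is the borderline case shown here.)

```python
import math

f = 1  # signal frequency in Hz

def signal(t):
    return math.cos(2 * math.pi * f * t)

# fs = f: one sample per cycle -- every sample hits the same phase,
# so the sampled data cannot reveal the oscillation.
undersampled = [signal(n / f) for n in range(8)]

# fs = 2f: two samples per cycle -- the samples alternate in sign,
# so the oscillation survives in the sampled data.
two_per_cycle = [signal(n / (2 * f)) for n in range(8)]

assert all(abs(s - 1.0) < 1e-6 for s in undersampled)       # looks like DC
assert any(s < -0.9 for s in two_per_cycle)                 # alternates sign
```

The undersampled data is indistinguishable from a constant signal, which is why reconstruction fails below the Nyquist rate.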
That is why the Nyquist Theorem suggests that we should use a sampling frequency at least twice the baseband frequency. That means recording frequencies up to 20 kHz requires a sampling rate of at least 40 kHz; CD audio uses a sampling rate of 44.1 kHz.

Each sample value is rounded off to the closest numerical value; this is called quantization. In the quantization process, information about the exact signal value is lost because of the rounding, so the original signal cannot be reproduced exactly anymore. We therefore need more quantization steps for better performance. For binary coding, the number of quantum levels is given by q = 2^n, where q is the number of quantum levels and n is the length in bits of the binary code words that describe the sample values. The difference between the quantized amplitude and the actual amplitude is called the quantization error. The more quantization steps we have, the lower the quantization error.

The coding process converts the discrete-amplitude signal into a series of binary bits for transmission and storage. From the diagram, we can see the analog signal, and the points indicate the sampling instants. Each quantized amplitude has one binary code; an amplitude falling between two levels is rounded off to the nearby value, for example 2. In the figure, the amplitude space is evenly divided into 6 quantum steps, so a 3-bit binary code can be used to code it. Each quantized amplitude is coded into binary bits: for the first sample, the quantized amplitude is 0 and the coded bits are 1 0 0; for the second sample in the figure, the quantized amplitude is 2 and the coded bits are 0 1 0.
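Uniform quantization and binary coding can be sketched directly from the formulas above. This is a minimal illustration, not the lecture's exact coding scheme: with n-bit code words there are q = 2^n quantum levels, each sample is rounded to the nearest level, and the rounding difference is the quantization error, which is at most half a quantum step.

```python
n_bits = 3
q = 2 ** n_bits          # q = 2^n quantum levels (8 levels for 3 bits)

def quantize(sample, lo=-1.0, hi=1.0):
    """Round a sample in [lo, hi] to the nearest of q evenly spaced levels.

    Returns (level index, binary code word, quantization error)."""
    step = (hi - lo) / (q - 1)                 # size of one quantum step
    level = round((sample - lo) / step)        # nearest level index
    code = format(level, f"0{n_bits}b")        # n-bit binary code word
    error = (lo + level * step) - sample       # quantized minus actual
    return level, code, error

level, code, error = quantize(0.3)
assert 0 <= level < q
assert len(code) == n_bits
# The quantization error never exceeds half a quantum step.
assert abs(error) <= ((1.0 - (-1.0)) / (q - 1)) / 2 + 1e-9
```

Doubling the number of bits doubles the number of levels per bit added, which is why more quantization steps give a lower quantization error.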
For sound recording, that is, to digitally record sound, samples of the sound wave are collected at periodic intervals and stored as numerical data in an audio file. From the figure, we can see that the audio signal coming from the microphone is analog. It is given to the analog-to-digital converter, which converts the analog signal into the discrete digital domain. These digital values are given to the digital signal processor, which processes the information, for example adding effects or removing noise, and then the bits are stored on a storage device. To retrieve the signal, the bits from the storage device are given to the digital-to-analog converter; the resulting analog signal is amplified, filtered, and sent to the speaker. So sound waves are sampled many times per second by the analog-to-digital converter, processed by the digital signal processor, and the digital-to-analog converter transforms the digital signal back into an analog sound wave.

So how does the analog-to-digital converter work? The analog audio signal enters the analog-to-digital converter and is sampled at regular intervals of time. The voltage measurements are then converted into binary numbers, and finally these binary numbers are stored on the recording medium. Now, how does digital-to-analog conversion work? The binary bits stored on the medium are given to the digital-to-analog converter. The binary numbers are converted into voltage steps, and these voltage steps are then filtered and smoothed out to produce the audio signal.

There are different digital audio formats available. The first is AAC, which stands for Advanced Audio Coding. The extensions are .aac, .m4p, or .mp4. The advantage is that it has very good sound quality, is based on MPEG-4, and provides lossy compression. It is generally used for iTunes music.
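The recording chain just described, sample, quantize, store, retrieve, can be sketched end to end with Python's standard-library wave module. This is an illustrative sketch with made-up parameters (a 440 Hz tone, 8000 samples per second, 16-bit quantization), not the lecture's own system; a real ADC and DAC are hardware, and here both sides are simulated in software.

```python
import io
import math
import struct
import wave

fs = 8000       # sampling rate: 8000 samples per second
duration = 0.1  # length of the recording in seconds
f = 440         # frequency of the recorded tone in Hz

# ADC side: sample the "analog" signal and quantize each sample to a
# signed 16-bit binary number.
samples = [
    int(32767 * math.sin(2 * math.pi * f * n / fs))
    for n in range(int(fs * duration))
]
frames = struct.pack(f"<{len(samples)}h", *samples)

# Storage: write the coded bits into a WAV container (in memory here,
# standing in for the recording medium).
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)   # mono
    w.setsampwidth(2)   # 16-bit samples
    w.setframerate(fs)
    w.writeframes(frames)

# DAC side: read the stored frames back and decode the binary numbers
# into sample values, ready to be turned back into voltage steps.
buf.seek(0)
with wave.open(buf, "rb") as r:
    n_frames = r.getnframes()
    decoded = struct.unpack(f"<{n_frames}h", r.readframes(n_frames))

assert list(decoded) == samples  # the stored bits round-trip losslessly
```

The digital portion of the chain is exact: the bits read back are identical to the bits stored, which is the key advantage of digital recording over analog.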
A disadvantage of this format is that the files can be copy-protected, so only limited access is provided. Next is MP3, which stands for MPEG-1 Layer 3. The extension is .mp3. The advantages are that it provides good sound quality with lossy compression and can be streamed over the web. A disadvantage is that it might require a standalone player or a browser plug-in. Next is the Ogg Vorbis format. The extension is .ogg. The advantages are that it is a free open standard, provides lossy compression, and is supported by some browsers; it is part of the Google WebM format. A disadvantage is that it has been slow to catch on as a popular standard. The next audio format is the WAV file. The extension is .wav. The advantages are that it provides good sound quality and is supported in the browser without a plug-in. The disadvantage is that the audio data is stored in raw format, so it needs a very large storage space. Next is WMA, that is, the Windows Media Audio format. The extension is .wma. The advantages are that it provides lossy or lossless compression and has very good sound quality; it is generally used by several music download sites. The disadvantage is that files can be copy-protected, and it requires an add-on player for some devices. Finally, FLAC, that is, the Free Lossless Audio Codec format. The extension is .flac. The advantages are excellent sound quality and lossless compression in an open-source format. The disadvantage is that it is not supported by all devices. These are the references for the session. Thank you.