Welcome to this video. Today, we are going to discuss the basics of acoustic imaging and the uses and limitations of beamforming for aeroacoustics. In previous lessons, we learned about the use of single microphones, which can measure the sound pressure at one location. However, they are difficult to use in environments with high background noise, and they cannot separate different sound sources. If we use several microphones simultaneously in an array, we can sample the sound field at several locations. This allows for sound visualization and the separation of sound sources, as well as a large reduction of the effects of background noise. However, these devices also have limitations, as we will discuss later.

The reason why we need microphone arrays in aeroacoustics is that aerospace noise sources, such as aircraft, are typically very complicated and emit sound in different ways and at different locations. As we mentioned before, microphone arrays allow for acoustic imaging to isolate these sound sources.

Let's discuss the basics of beamforming. Imagine that we have one monopole sound source. We define a scan grid of potential sound sources, since we do not know the location of the sound source a priori. The microphones of our array will then record the sound signal of the source with different time delays. Using these time delays in a smart way, we can produce a beamforming output, which shows a maximum at the location of the sound source. If we consider a grid point without a sound source, we obtain a much lower value. We can already see that the end result has some limitations, like the sidelobes, spurious sources, and the beamwidth of the main lobe, which limits our resolution.

Imagine now that we have an array with N microphones. We build an N-by-1 vector p, containing the Fourier transforms of the pressures of each microphone at a given frequency f.
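The delay-and-sum idea described here can be sketched in a few lines of Python. This is a minimal illustration with invented numbers (a 1-D line array of 8 microphones, a 2 kHz tone, and a hypothetical source position), not the exact setup from the video: each candidate grid point gets its own set of delays, and only the true source position aligns all channels coherently.

```python
import numpy as np

# Hypothetical setup: a monopole at x_src emits a tone, and a line array of
# microphones records it with per-microphone propagation delays.
c = 343.0                       # speed of sound [m/s]
f = 2000.0                      # tone frequency [Hz]
fs = 48000.0                    # sampling rate [Hz]
t = np.arange(0.0, 0.1, 1 / fs)

mics = np.linspace(-0.5, 0.5, 8)   # microphone x-positions [m] (assumed)
x_src, z = 0.1, 1.0                # source x-position and array-to-grid distance [m]

def delays(x_point):
    """Propagation delay from a grid point at (x_point, z) to every microphone."""
    r = np.sqrt((mics - x_point) ** 2 + z ** 2)
    return r / c

# Synthesize the microphone signals (amplitude decay ignored for simplicity)
d_src = delays(x_src)
signals = np.sin(2 * np.pi * f * (t[None, :] - d_src[:, None]))

# Scan grid of potential source positions: delay-and-sum output at each point
scan = np.linspace(-0.5, 0.5, 41)
power = []
for x in scan:
    d = delays(x)
    # advance each channel by its assumed delay so a source at x would align
    shifted = [np.interp(t, t - d[m], signals[m]) for m in range(len(mics))]
    power.append(np.mean(np.sum(shifted, axis=0) ** 2))

x_peak = scan[int(np.argmax(power))]   # peaks at the true source position
```

At grid points away from the source, the channels add with mismatched phases and the output drops, which is exactly the "much lower value" mentioned above.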
With this vector, we can define the cross-spectral matrix C, which contains the experimental information. We calculate it as the ensemble average of p pᴴ over time. We then define a scan grid of potential sound sources. Each grid point has a position vector, and we assume a sound propagation model, normally a monopole, but not necessarily. For each grid point, we calculate the expected signals that would be recorded by each microphone if there actually was a real source at that location. We do so by using the so-called steering vectors, which are basically Green's functions, with components proportional to exp(−2πif Δt). Here, i is the imaginary unit, f is the frequency, Δt is the time delay between each grid point and each microphone, and c is the speed of sound. This is just one of the many formulations for the steering vector; many more can be found in the literature.

Finally, we assess the match between the pressures modeled by the steering vectors and the signals actually recorded by the microphones. We do so by using the conventional beamforming formula, which gives us the source autopower for each grid point. Here, C is the cross-spectral matrix, and everything else depends on the propagation model we chose. Therefore, beamforming can be seen as an exhaustive search over all grid points, which provides a source map as an end result.

As an example, imagine that we have a single sound source on our scan grid. A typical beamforming source map will look like this. You can see that this method has some limitations, as we said before. First of all, there are some sidelobes, or spurious sources, that could be confused with actual sources. This can be improved by using densely populated arrays. Secondly, the beamwidth of the main lobe limits the spatial resolution, basically the minimum distance at which two sound sources can be separated. This can be improved, on the other hand, by using large-aperture arrays. Therefore, we need a compromise solution.
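The cross-spectral matrix and the conventional beamforming step can be illustrated as follows. The normalization A(ξ) = gᴴ C g / ‖g‖⁴ used here is one common convention for the source autopower, and the geometry, frequency, and monopole Green's function exp(−ikr)/(4πr) are illustrative assumptions, not the exact ones from the video.

```python
import numpy as np

rng = np.random.default_rng(0)
c, f = 343.0, 3000.0
k = 2 * np.pi * f / c           # wavenumber

# Line array of 16 microphones in the z = 0 plane (assumed geometry)
N = 16
mics = np.stack([np.linspace(-0.5, 0.5, N), np.zeros(N), np.zeros(N)], axis=1)
src = np.array([0.15, 0.0, 1.0])   # true monopole position (hypothetical)

def green(x):
    """Free-field monopole Green's function from point x to every microphone."""
    r = np.linalg.norm(mics - x, axis=1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# Cross-spectral matrix C as an ensemble average of p p^H over snapshots;
# each snapshot is a random complex source amplitude times the Green's vector.
snapshots = [green(src) * (rng.standard_normal() + 1j * rng.standard_normal())
             for _ in range(200)]
C = np.mean([np.outer(p, p.conj()) for p in snapshots], axis=0)

# Conventional beamforming: source autopower A = g^H C g / ||g||^4 per grid point
scan_x = np.linspace(-0.5, 0.5, 101)

def autopower(x):
    g = green(np.array([x, 0.0, 1.0]))
    return np.real(g.conj() @ C @ g) / np.linalg.norm(g) ** 4

A = np.array([autopower(x) for x in scan_x])
x_peak = scan_x[int(np.argmax(A))]     # source map peaks at the true position
```

This makes the "exhaustive search" nature explicit: the steering vector is recomputed for every grid point and tested against the same measured C.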
For a given setup and number of microphones, we can optimize the positions of the microphones within the array in order to obtain the best results. In this example, we have 64 microphones rearranged for a lower sidelobe level and a better spatial resolution.

Another advantage of beamforming is that incoherent background noise, such as wind noise or turbulent-boundary-layer noise, can be eliminated. Since these noises mostly contribute to the main diagonal of the cross-spectral matrix, we can improve the results by removing this diagonal. This is especially useful for closed-section wind tunnels, where the microphones normally measure the hydrodynamic pressure fluctuations of the wind-tunnel boundary layer. In this example, we can see how the results improve when we remove the diagonal.

Another consideration for beamforming in wind tunnels is the convection of the sound due to the presence of wind. Imagine an airfoil emitting trailing-edge noise inside a wind tunnel. If we do not take into account the moving medium, the source map is shifted in the streamwise direction from the correct position. However, if we use a steering-vector formulation that takes into account the Mach number of the flow, we obtain the correct position and sound levels.

Another way to improve our results when using microphone arrays is to use advanced acoustic imaging methods. In this example, we are going to use the well-known method CLEAN-SC. This technique starts with the source map of conventional beamforming. It localizes its peak value and calculates a point source that would generate that peak value at that location. It then subtracts the contribution of that sound source from the source map and repeats the process iteratively, removing the next-strongest sources and cleaning the source map. This method exploits the fact that sidelobes are spatially coherent with the main lobe. In this slide, we can see an example of the potential of CLEAN-SC.
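The diagonal-removal step mentioned above can be sketched as follows. Noise that is incoherent between channels averages toward zero on the off-diagonal entries of the CSM but piles up on the main diagonal, so zeroing the diagonal suppresses it. All numbers here are illustrative, and the amplitude-normalized monopole exp(−ikr)/r is an assumption made to keep the scaling simple.

```python
import numpy as np

rng = np.random.default_rng(1)
c, f, N = 343.0, 3000.0, 16
k = 2 * np.pi * f / c
mics = np.stack([np.linspace(-0.5, 0.5, N), np.zeros(N), np.zeros(N)], axis=1)

def green(x):
    """Amplitude-normalized monopole from point x to every microphone (assumed)."""
    r = np.linalg.norm(mics - x, axis=1)
    return np.exp(-1j * k * r) / r

g_src = green(np.array([0.1, 0.0, 1.0]))   # true source position (hypothetical)

# Average the CSM over snapshots; every channel also sees noise that is
# incoherent from microphone to microphone (e.g. boundary-layer pressure).
M = 500
C = np.zeros((N, N), dtype=complex)
for _ in range(M):
    a = rng.standard_normal() + 1j * rng.standard_normal()        # source amp.
    noise = 2.0 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    p = a * g_src + noise
    C += np.outer(p, p.conj())
C /= M

# Diagonal removal: the incoherent noise power sits on the main diagonal
C_dr = C - np.diag(np.diag(C))

def autopower(Cmat, x):
    g = green(np.array([x, 0.0, 1.0]))
    return np.real(g.conj() @ Cmat @ g) / np.linalg.norm(g) ** 4

scan = np.linspace(-0.5, 0.5, 101)
A    = np.array([autopower(C, x) for x in scan])     # with diagonal
A_dr = np.array([autopower(C_dr, x) for x in scan])  # diagonal removed
```

Comparing `A` and `A_dr` shows the effect: the noise floor drops across the map once the diagonal is removed, while the peak stays at the true source position.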
Here, we have a conventional beamforming source map of an aircraft model in a closed-section wind tunnel. By using CLEAN-SC, we obtain much better results, with virtually no sidelobes. We can even discover new sound sources along the leading edge that were hidden by sidelobes before. Many other advanced methods exist in the literature, mostly tailored for specific applications, but a detailed explanation of all of them is beyond the scope of this lesson. If you want to obtain more information on the topics discussed in this lesson, here are some recommended references. This concludes our video. I hope you enjoyed it, and see you in the next one.