So let's have a look at the hands-on now. First, I would like to show you some tools to debug the signal processing, and how we actually do it. What will we need for this? We will need our Discovery board. We will need the microphones, which are embedded here: two MEMS microphones, about 2.1 centimeters apart. This will be our audio source. Then we were speaking about the necessity of an anti-aliasing filter for the audio. This is embedded in the codec: there is a hardware codec right next to the jack input. The signal is digitized there and comes to the F7 through the SAI interface. So then we get the real data, and we will process it and send it back to the audio output, which is also the codec, and you can listen to the processed audio on the headphones. That's all we need from the hardware side. But from the software side, how do we work with the DSP, and how do we debug it? If you are using any of the currently common IDEs for ARM devices, you can't really visualize signals. In most of them, the best you can get is the memory window, where you see very nice numbers like this, and it doesn't really tell you what your signal looked like. First of all, when you want to evaluate something, you need to see it. So how to see the signal, how to debug it, and how to simulate the DSP functionality before you start implementing it on the processor: we will cover this right now. We started with the fact that you are not really able to read this, and most IDEs don't offer any means of visualization. And if they do, they don't have any accompanying software for DSP simulations. So the only option is usually to export the data and use some other tool to process it. What we want to achieve is something like this: instead of this memory array, to see a sine wave, or whatever is inside the signal.
And then you can do some evaluation and simulation on this signal, because you usually want to simulate with real signals, not some artificial sine which you can generate, but the real signal you get out of your microphones, to see what happens if you do filtering, if you do an FFT, if you do whatever. For this, you usually use DSP processing software like MATLAB, which is the commercial first-class software, or Scilab and Octave, which are mostly clones, free and open source for the community. Or, just for basic evaluation or visualization, you can use spreadsheet software or any other software that can visualize raw data. For our workshop purposes, we have chosen Scilab, which is a very nice tool from several points of view. First of all, it doesn't cost anything; it's completely open source. You can download it here, but we have provided it in the package. It is quite easy to install, about 120 megabytes, and it has very good online help, which is also available offline in the installation. The installation takes just a few minutes. So if you haven't done it yet, please install it now. It is in the PC Tools folder, the Scilab EXE. It depends whether you have the 64-bit or 32-bit variant of the OS; please select the correct one. And when installing, be sure you check "installation without internet connection". It will be faster, because the installer won't keep trying to connect to get updates. So please go through the installation now, because we will use it in the first hands-on. Now, what do we need to export the signal? Well, first of all, we should consider that the DSP usually works on buffers. In our audio example, we are sampling, I think, 512 samples of the audio signal into internal memory. Then we get an interrupt, and we do the processing over this buffer of 512 samples. This is quite a common approach, and usually you use a double-buffering scheme.
So you acquire one buffer while you are processing the other one, and then the buffers swap. Usually what you want is a picture of one buffer; that is usually quite enough to evaluate the signal, though it depends on the signal frequency and the signal properties. In our case, we will let the code run in an endless loop, waiting for the interrupt when the buffer is full. Then we can put a breakpoint, and we can be sure that at that moment we have one full buffer in memory. Then we can export it from IAR and load it into Scilab. Exporting from any tool is usually possible through hex files, binary files, and so on, or CSV, for example. In IAR, which we are using today, we have prepared a set of macros to ease our life a little, because normally you can do the memory save manually, but it would take time to do it for every new acquisition. So we have prepared a macro called My Memory Save, which should be in the macro quick launch in our example. When you double-click the blue arrow, it will output some data, and we will load this data into Scilab. At first, we should get a picture like this, but for sure not with a pure sine wave: with the real waveform coming from our microphones. This will be our starting point for any processing, because first we need to know what is in our buffer, what we have acquired. So we can leave that in the background, and we can start with the IAR project, which is this one, 0.6 DSP Audio. Please open it now. In this project, we will basically not change anything; everything is done already, and we will just change some configuration, or some flags, more or less. This project is a little more complex, so I will first introduce the main while loop and what is happening inside, and we can analyze the main points of the main loop.
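The double-buffering idea described above, where the DMA interrupt marks one buffer full while the main loop processes the other, can be sketched roughly like this. This is a minimal illustration with invented names (buffer_ready, ready_idx, process_pending_buffer); it is not the workshop project's actual code.

```c
#include <stdint.h>

#define BUF_LEN 512

static int16_t buf[2][BUF_LEN];       /* two alternating (ping-pong) buffers */
static volatile int buffer_ready = 0; /* set by the (simulated) DMA interrupt */
static volatile int ready_idx = 0;    /* which buffer has just been filled    */

/* Called from the DMA interrupt when a buffer is full: mark it
 * ready; the DMA keeps filling the other buffer meanwhile. */
void dma_buffer_full_isr(int which)
{
    ready_idx = which;
    buffer_ready = 1;
}

/* A trivial demo "processing" routine: clear the buffer. */
void clear_buffer(int16_t *b, int len)
{
    for (int i = 0; i < len; i++)
        b[i] = 0;
}

/* Main-loop side: wait for the flag, then process the buffer that
 * is NOT currently being filled. Returns the index processed. */
int process_pending_buffer(void (*process)(int16_t *, int))
{
    while (!buffer_ready)
        ;                     /* in real code, acquisition runs in background */
    buffer_ready = 0;
    process(buf[ready_idx], BUF_LEN);
    return ready_idx;
}
```

In the real firmware the flag is set in the DMA transfer-complete interrupt, so the main loop only ever touches the buffer that is not being written by hardware.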
After the initialization, we enable some things, like the cache, for example. Then we enable the LCD, to also show the signal on the LCD as a side effect. And then we just start our board support package with the audio in and out streams. Starting from this point, I'm getting the audio stream from the microphones into my RAM via DMA, and I'm also sending the same RAM content out to the headphones. So it's running completely in the background, via DMA to the serial audio interface. Then I have some initialization of the filtering functions, but we don't care about that right now. And this is my processing loop. In my never-ending loop, I just check if my buffer is filled; this variable is set in the DMA interrupt. So every 512 samples, this variable gets set. And I'm doing double buffering, so here I decide which buffer I'm going to process, whether it's buffer 1 or buffer 2. And then, because the codec is sending the data in the form of left and right channels multiplexed in time, I would like to have two arrays: one with only the left channel and another with only the right samples. This is what I do here: I just create two arrays with a length of 512. So every 512 samples I get an interrupt, then this flag gets set, I decide which buffer I'm going to treat right now, and I split it into left and right. Starting from this, I can do whatever I want. So let's stop here; we just want to see what is in these buffers. What is our data? Then you can compile, run, and debug with this button. If you succeed in compiling and running, you should get an application like this, with something on the screen showing the time domain of the audio signal. OK, having the signal on the screen is quite sexy, but you can't really evaluate it because it's changing quite fast. At some point you want to stop and evaluate the signal in a, let's say, more professional way. That's why we have this line in the code.
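The left/right split described above, taking the time-multiplexed codec stream (L, R, L, R, ...) apart into two per-channel arrays, comes down to a simple de-interleaving loop. A minimal sketch, with illustrative names rather than the project's actual identifiers:

```c
#include <stdint.h>

/* Split an interleaved stereo stream into separate channel arrays.
 * 'interleaved' holds 2*len samples ordered L,R,L,R,...;
 * 'left' and 'right' each receive len samples. */
void split_channels(const int16_t *interleaved,
                    int16_t *left, int16_t *right, int len)
{
    for (int i = 0; i < len; i++) {
        left[i]  = interleaved[2 * i];     /* even positions: left channel  */
        right[i] = interleaved[2 * i + 1]; /* odd positions: right channel */
    }
}
```

With 512 samples per channel, as in the workshop project, the interleaved DMA buffer is 1024 samples long and this loop yields the two 512-sample arrays the rest of the processing works on.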
While it's running, you can put a breakpoint here at line 229; on ARM, you can place a breakpoint while the software is running. Once we have stopped here, we are sure that the hardware has stopped with a full buffer ready for processing. You could of course stop at any time, but then you could have a buffer that is half full and half empty. So it's better to put the breakpoint at this point, when we are sure we got the full buffer without any trouble. In the watch window, you should already have the left channel pre-selected. You see my left channel is full of numbers, but that's basically all you can tell: there are some numbers inside. Now we would like to export this data and import it into some software that can visualize it. For this, we have prepared the macros in the macro quick launch. If you double-click this memory save macro, it will effectively take the left and right buffers and save them as hex files, one file per buffer. These files are saved in your project location. Once you have done this, you should have the left and right hex files generated next to your project, OK? The size should be about three kilobytes for each buffer. To launch the macro, you need to double-click the arrow directly, OK? If the result is zero, it went fine. OK, so we have exported the data. The last step is just to import it into Scilab. For this, there is a file, Scilab process one, which is a script file for Scilab, and normally if you double-click it, it should open Scilab. You should get the script-file editor inside Scilab. Here, if you press F5, it will execute the script, and this is the result you should get: something like this, with the audio signal that was around your board at the time when you hit the breakpoint. So I guess you recognize that Scilab is very much a MATLAB clone, OK?
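If your toolchain has no export macro like the IAR one used here, a portable fallback is to print the stopped buffer as plain text over whatever stdout you have retargeted (semihosting, a UART, and so on) and capture it on the PC side. This is a hypothetical sketch of that alternative, not the workshop's My Memory Save macro:

```c
#include <stdio.h>
#include <stdint.h>

/* Write one sample per line, so Scilab, Octave, or a spreadsheet
 * can read the capture back with a plain text/CSV import.
 * Returns 0 on success, -1 on a write error. */
int dump_buffer_csv(FILE *out, const int16_t *buf, int len)
{
    for (int i = 0; i < len; i++)
        if (fprintf(out, "%d\n", buf[i]) < 0)
            return -1;
    return 0;
}
```

On a PC build this writes to any stdio stream; on the target you would retarget fwrite/putchar to the UART and log the output with a terminal program.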
For those who haven't worked with this kind of software: you can usually work with it in console mode, where you type the commands you want to execute one by one, or, much better, you put the same commands into a script, so you can run many commands at once, like a batch file. So, very briefly, what is inside this Scilab process one script? First of all, I define some variables, like the length of my buffer, the speed of sound, and the sampling frequency, which is 16 kilohertz in our case. Then the main thing is that I open my left and right files and read them into left and right variables inside this software. And all I do is plot them on one screen, OK? And I put a legend around it. So very easily, with this software, you are able to load data from external files and visualize it in quite a nice way. In this figure, we have both channels, left and right, but you see they are mostly the same, because the microphones are quite close to each other and they are acquiring the same audio signal. Right now we are able to actually communicate between the STM32 and our simulation software. So first of all, I'm able to visualize my signal; this is the starting point you always need to reach. And now, if you want to run any algorithm with the CMSIS library, or whatever software, on the STM32, you probably want to check whether the algorithm is working properly, OK? The way to check this is usually to use software like MATLAB or Scilab to work with the same signal and pass it through the same algorithm on the PC, which has almost infinite word length; you can use whatever algorithm you like with the power of the PC. Then you take the result and compare it with the result you get from the STM32, or from the DSP in any micro. If you get the same result, you are lucky, you are happy, everything works well.
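One practical note on that comparison step: the PC reference usually runs in double precision while the STM32 runs in single precision or fixed point, so the two results will rarely match bit for bit, and you compare them within a tolerance instead. A minimal sketch of such a check, with illustrative names:

```c
#include <math.h>

/* Returns 1 if every MCU sample is within 'tol' of the PC
 * reference, 0 otherwise. 'mcu' is the single-precision result
 * from the STM32, 'ref' the double-precision PC reference. */
int buffers_match(const float *mcu, const double *ref,
                  int len, double tol)
{
    for (int i = 0; i < len; i++)
        if (fabs((double)mcu[i] - ref[i]) >= tol)
            return 0;
    return 1;
}
```

Picking the tolerance is part of the evaluation: for a float implementation of an FFT or filter, a bound a few orders of magnitude above machine epsilon of the signal scale is a reasonable starting point.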
We have succeeded in the first part: importing the signals from our device.