how we can use it on the internet, because it's not that easy. What I did from here is actually create a player page, so you can download the music and play it online, or, following the example they provide, download it to the phone very easily, and there's no problem with that. OK, so here is my creation; let me play you a video.

So finally, I'm going to talk about the software. Currently the player still has one problem: to pause it, you basically have to power it off. And the main part of this whole thing is actually software, so I need to talk a little bit about that. In a nutshell, the audio processor (APU) in the NES is quite easy to emulate. It runs at about 1.8 MHz, which makes it a very tractable, very nice problem. It has five channels. Two of them are pulse channels; how do you make them? From square waves. There's a triangle channel, and a noise channel, which gives a kind of built-in percussion sound. And there's also a DMC (sample playback) channel. There are a lot more details; for example, there are envelopes, sweeps, various things inside these channels.

So let me show how this sound works. Without going into too much detail, here is just the skeleton of how a pulse channel works. First, we have the APU clock. The APU clock goes into a timer divider. The timer divider is just a counter where you set a number N as the reload value. Every clock, it increases the counter value; when the counter reaches N, it resets to 0 and, at the same time, clocks the next unit. The next unit is the sequencer. The sequencer is a sequence of steps, an 8-step sequence here, and each step holds an output value.
So as the sequencer goes from step 0 to step 7, it outputs one period of a square wave: as the clocks come in, out comes the square wave. The table shown here gives a 25% duty cycle. Inside the APU there are several predefined sequence tables; one of them is 12.5%, and there are other duty cycles, so the same note can get a slightly different timbre depending on which table is selected.

To simulate this in software is actually very simple. When the APU is clocked, first increase the timer value; if the timer value reaches N, set it back to 0 and increase the step index. That's all we need to write on the software side. Then there's another function that returns the pulse channel output, which is just a table lookup at the current step; its only job is to return that number. Each channel is organized like that.

If we want to do the whole simulation, the first thing is to define a sample frequency for the audio; we use 44.1 kHz. Then we have the APU clock frequency, which is about 1.8 MHz. Divide one by the other and you get about 40 clocks per sample. That means I clock the APU about 40 times, then take one sample, then clock another 40 and take another sample, and so on. That's basically how the simulation, our emulator, works.

Of course, if you do it this way, it works very well on a PC, but it works very badly on a microcontroller, because it's very, very slow: that per-clock loop is quite time-consuming, and the APU clock function in particular does a lot of complicated work. So instead of clocking one cycle at a time, you need to write the function so that you give it 40 clocks and it directly gives you the state after 40 clocks.
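The timer-plus-sequencer skeleton above, and the "give it 40 clocks at once" optimization, can be sketched roughly like this. All the names here (`pulse_t`, `pulse_clock`, and so on) are my own illustration, not from the speaker's actual code; the duty tables are the standard NES pulse sequences.

```c
#include <stdint.h>

/* The four NES pulse duty sequences. */
static const uint8_t duty_table[4][8] = {
    {0,1,0,0,0,0,0,0},   /* 12.5% */
    {0,1,1,0,0,0,0,0},   /* 25%   */
    {0,1,1,1,1,0,0,0},   /* 50%   */
    {1,0,0,1,1,1,1,1},   /* 75% (inverted 25%) */
};

typedef struct {
    uint16_t timer;   /* current counter value             */
    uint16_t period;  /* the reload value N (must be >= 1) */
    uint8_t  step;    /* position in the 8-step sequence   */
    uint8_t  duty;    /* which duty row to use             */
    uint8_t  volume;  /* channel volume                    */
} pulse_t;

/* One APU clock: bump the timer; on rollover, reset it and
   clock the sequencer on to the next step. */
static void pulse_clock(pulse_t *p) {
    if (++p->timer >= p->period) {
        p->timer = 0;
        p->step = (p->step + 1) & 7;
    }
}

/* Batched version: same effect as n calls to pulse_clock(), but in
   constant time -- the "state after 40 clocks" optimization. */
static void pulse_clock_n(pulse_t *p, uint32_t n) {
    uint32_t total = p->timer + n;
    p->step  = (p->step + total / p->period) & 7;
    p->timer = (uint16_t)(total % p->period);
}

/* Channel output: the duty bit at the current step times the volume. */
static uint8_t pulse_output(const pulse_t *p) {
    return duty_table[p->duty][p->step] ? p->volume : 0;
}
```

With a roughly 1.79 MHz APU clock and 44.1 kHz output, the caller would invoke `pulse_clock_n` with 40 or 41 clocks per sample, carrying the fractional remainder in an accumulator so the long-run rate stays exact.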
Once we can generate the samples, what we need next is buffering, because we also have to think about the performance of the emulator. Our audio sample rate is 44.1 kHz, and each frame I need 1024 samples, because I later send each 1024-sample frame to the FFT unit for plotting the spectrum. At 44.1 kHz, you can calculate that 1024 samples take about 23 milliseconds. So you need to generate 1024 samples within 23 milliseconds; otherwise your program underruns and the sound breaks up. This is a hard requirement. If you cannot always achieve it, the right way to deal with it is buffering: as long as your buffer is long enough, you can have continuous audio streaming output. At the same time, this gives me an opportunity to optimize my code so that the majority of sample batches finish within 23 milliseconds; even if one or two batches take a little longer, because I need to read the SD card or whatever, I can still rely on the buffer to ride out those delays.

I spent about one and a half years writing the software. It was quite slow going, but I used a lot of libraries so I didn't have to write everything myself: LVGL, which is a very good graphics library, for all the UI elements you interact with; FatFs for reading the SD card; the ARM DSP library, which helps do the FFT on a fixed-point data buffer; and a very useful library called blip_buf, which produces band-limited output. The reason you need it is that some of the channels, like the triangle wave or the pulse wave, mathematically have an infinite number of harmonics. When I sample them at 44.1 kHz, a lot of aliasing happens: the high frequencies fold back into the low frequencies, and it sounds noisy.
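One common way to realize the buffering described here is a single-producer/single-consumer ring buffer: the emulator pushes samples, the audio output pops them, and a buffer a few frames deep rides out the occasional slow batch (an SD-card read, say). This is a generic sketch with made-up names, not the speaker's actual implementation:

```c
#include <stdint.h>

#define BUF_SAMPLES 4096u  /* a few frames of headroom; power of two */

static int16_t ring[BUF_SAMPLES];
static volatile uint32_t head = 0;  /* advanced by the emulator (producer) */
static volatile uint32_t tail = 0;  /* advanced by audio output (consumer) */

/* head and tail run free and wrap modulo 2^32; since BUF_SAMPLES is a
   power of two, head - tail and head % BUF_SAMPLES stay consistent. */
static uint32_t ring_avail(void) { return head - tail; }
static uint32_t ring_free(void)  { return BUF_SAMPLES - ring_avail(); }

/* Producer side: returns 0 if the buffer is full. */
static int ring_push(int16_t s) {
    if (ring_free() == 0) return 0;
    ring[head % BUF_SAMPLES] = s;
    head++;
    return 1;
}

/* Consumer side: returns 0 on underrun (nothing to play). */
static int ring_pop(int16_t *s) {
    if (ring_avail() == 0) return 0;
    *s = ring[tail % BUF_SAMPLES];
    tail++;
    return 1;
}
```

As long as the producer stays ahead on average, a momentary slow batch only drains the buffer a little instead of glitching the output.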
So blip_buf internally does a lot of filtering for you, so that it guarantees the output is still band-limited within the sample rate you've assigned.

So, what I've learned. The project itself was the stay-at-home, do-nothing-but-this part; this is what I learned beyond the project. First, the CMake build system: the Raspberry Pi Pico SDK requires you to understand CMake, so I had to sit down and study it. Second, the event-driven hierarchical state machine pattern. This is very nice for embedded systems if you are doing UI programming or handling lots of different events. In a player you have many button presses, headphone plug and unplug, SD card insert, all of these events, and you have to handle them in every state, whether you are playing or browsing files. It's a very good pattern, and there are forty-plus chapters of tutorials on YouTube, very nicely organized. I also learned some fixed-point mathematics. And one of the most important things: if you are working with the Raspberry Pi, the best place to ask questions is their forum. You will encounter the chip designers, you will encounter the SDK authors answering your questions, which is very, very helpful.

So that's my presentation. Let me leave this slide up, and feel free to ask your questions.
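As a footnote to the state machine point above: the flat, non-hierarchical core of the idea is just an event dispatcher per state. The states and events below are invented for illustration (a player browsing files, playing, or missing its card); the hierarchical pattern the speaker mentions layers nested states and entry/exit actions on top of this basic dispatch:

```c
/* Events the player might receive from buttons and card detect. */
typedef enum { EV_PLAY_PAUSE, EV_SD_REMOVED, EV_SD_INSERTED } event_t;

/* Top-level states of this toy player. */
typedef enum { ST_BROWSING, ST_PLAYING, ST_NO_CARD } state_t;

/* Dispatch one event in the current state; return the next state.
   Each state decides which events it handles; unhandled events
   leave the state unchanged. */
static state_t dispatch(state_t s, event_t e) {
    switch (s) {
    case ST_BROWSING:
        if (e == EV_PLAY_PAUSE) return ST_PLAYING;
        if (e == EV_SD_REMOVED) return ST_NO_CARD;
        break;
    case ST_PLAYING:
        if (e == EV_PLAY_PAUSE) return ST_BROWSING; /* stop, back to list */
        if (e == EV_SD_REMOVED) return ST_NO_CARD;  /* handled here too   */
        break;
    case ST_NO_CARD:
        if (e == EV_SD_INSERTED) return ST_BROWSING;
        break;
    }
    return s;  /* event not relevant in this state: ignore it */
}
```

The point of the pattern is exactly what the transcript describes: every state has to say what each event means to it, and the dispatcher makes that explicit instead of scattering flag checks through the code.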