Anyway, let's get back to what we were doing. EEG to music: we've spent quite a lot of time on it without much progress. It was generating something, but it was noise; it wasn't very musical. So I thought we'd give it another go, and this time not start from scratch like before. Previously we learned quite a bit about music generation, MIDI, SoundFonts, things like that; you can see the previous videos. The idea here is to take an existing tool. This is already on the website, so you can go try it out.

Right, this is the butter_bandpass function. Its four parameters are lowcut, highcut, sampling frequency, and order, which is 5 by default. The function first calculates the Nyquist frequency, which is half of the sampling frequency, and then normalizes the lowcut and highcut frequencies by dividing them by the Nyquist frequency. The butter function from the scipy.signal module is then used to calculate the coefficients of the Butterworth filter. We get the coefficients in b and a, and the next function, butter_bandpass_filter, actually uses those coefficients. It has the same inputs, generates b and a, and uses lfilter, which filters data along one dimension with an IIR or FIR filter. Which one is it by default? The docs just describe parameter b as array_like, so it's not actually telling us which one it is. Is it IIR or FIR? I'd say IIR, but that's just a guess. GPT-4 has a better, more generic explanation, which is better for our purposes. The filtered data is returned. It didn't actually explain the whole code, did it? It's good that GPT-4 says it's a good script, considering it wrote it itself.

Then the whole spectrum: we have delta, theta, alpha, and beta, and they seem to work okay. This is during a seizure; there's actually an artifact at the edge there. And this is during non-seizure, which has that weird thing in it. Yeah, this is the end of the seizure. So there's seizure, and there's no seizure. It's a tricky business, because the spectra are actually similar, but similar for different reasons; you have to look at the power in them.

So yes, we want to improve this tool, but we also want to generate music out of it, so we don't have to review EEG manually like this: we could just hit a play button and listen to it. Hopefully it's not too annoying to listen to. I need to write a prompt for it: "This tool is working well. It's fast and responsive. We would like to add an option to play musical notes when we are scrolling through the file. The notes could be derived from the power in each frequency component and potentially played as chords, so multiple notes together." So it suggests integrating a feature to play musical notes based on the power of frequency components in EEG data. Yeah, okay. This would turn the data into an auditory experience, blah, blah, blah. Yes, a high-level plan: data mapping, with frequency-to-note mapping, okay, and scaling, where the power of each frequency could control the volume or dynamics of the corresponding note. Note and chord generation: form chords from the selected frequencies, which can be done by grouping notes that harmonize well together; we'd need help from a musician here. Implement logic to decide how chords change over time. Audio synthesis: we want to play it back in the browser, using an audio API. Integration with the existing application: add a play button. No, we don't want to do that. The question is, do we just want to play raw frequencies, or actual musical notes, which we tried doing previously?
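As a reference for the filter walkthrough above, here is a minimal sketch of those two functions in the standard SciPy band-pass form. The tool's actual code may differ in details; the names here follow the common SciPy pattern rather than being copied from the tool.

```python
from scipy.signal import butter, lfilter

def butter_bandpass(lowcut, highcut, fs, order=5):
    # Nyquist frequency is half the sampling frequency.
    nyq = 0.5 * fs
    # Normalize the cutoff frequencies by the Nyquist frequency.
    low = lowcut / nyq
    high = highcut / nyq
    # butter() returns the filter coefficients b (numerator) and a (denominator).
    b, a = butter(order, [low, high], btype='band')
    return b, a

def butter_bandpass_filter(data, lowcut, highcut, fs, order=5):
    # Same inputs plus the data: generate b and a, then apply the filter.
    b, a = butter_bandpass(lowcut, highcut, fs, order=order)
    # lfilter() filters along one dimension; with Butterworth coefficients
    # like these, the filter is IIR.
    y = lfilter(b, a, data)
    return y
```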
So in the code we should already have the power for each EEG frequency component (delta, theta, alpha, beta) when scrolling through the file. Can we use this to generate musical notes? It gives kind of the same answer again, but it's not actually doing it. If we assign a specific note, or a range of notes, to each EEG frequency band, it doesn't mean we will only have four notes. It's actually starting to generate JavaScript using the Web Audio API, and this one's using sinusoids. Should we do this on the front end or the back end? Mainly the front end at this stage, because we're obviously running the server for other things as well, and we want to do as much as possible on the front end. So yes, the performance will depend heavily on the device you're using to access the web application. We'll probably go for some sort of hybrid approach; well, obviously a hybrid approach, the question is the ratio between back-end and front-end processing. Another question: when we normalize, it's normalizing within a certain window, so the time window there is important.

Now for replacing all this, mapping the data to notes. We're getting errors: in the JavaScript at line 142 there's kind of a cheat where it replaces this with the data value, and I have a feeling this should be happening elsewhere. Can GPT-4 help with this? There's the current seconds, so that's a pointer. fetchDataAndRender: we should probably split this function. The existing code fetches the plot data; we have fetch with the API URL, then response.json() for the data, and then the code for plotting the signal. We have a constant of notes to play. Yes, we already have playFrequency and mapEEGDataToNotes, and it keeps adding more and more functions, like extractEEGDataForMusic. The main one is fetchDataAndRender, which we are replacing. "The AudioContext was not allowed to start." And another error: noteRange is not defined at line 69; we have noteRange in a different function. Right, this is actually similar to what we had in the previous code. I have a feeling we need to change the back end as well. We have powerToNote. That one's just a warning, not an actual error, so that's fine, and that one's just info: "The AudioContext was not allowed to start." Is that why we can't hear anything? JavaScript line 129: webkitAudioContext may not exist. Okay, we actually don't want the button. I'm getting this error, which doesn't seem to be related to the button, and it added an extra button. We don't particularly care about TypeScript, so that error is okay, but the browser is not playing anything at the moment. "We also do not want to add an additional button; it is not needed. Can we make sure the sound is played when the scroller is moved?" This is a bit edgy; we're taking the risk of annoying the user. So we have audioContext, initialized = false, then initializeAudioContext, then playFrequency. That's no problem; we can get rid of that and use this instead. That should initialize the audio context, and then in this scroll event, fetch and render. Yeah, we don't need all that. It keeps adding more functions, which is ridiculous. I always have to say, okay, we already have that. It's likely autoplay policies preventing us from playing audio in the browser. Every time I ask for something, it generates a new function, which is not great. Yeah, we'll try fixing this next time. I'll see you in a bit. Bye.
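A note for next time: the "AudioContext was not allowed to start" message is the browser's autoplay policy, which blocks audio until a user gesture, so the likely fix is to create or resume the context inside the scroll handler itself. Here is a minimal sketch of that idea; names like BAND_NOTES, bandPowers, and playChord are illustrative assumptions, not the generated code.

```javascript
// One note per EEG band; frequencies in Hz (C3, E3, G3, C4 as an example chord).
const BAND_NOTES = { delta: 130.81, theta: 164.81, alpha: 196.00, beta: 261.63 };

let audioCtx = null;

function ensureAudioContext() {
  // Autoplay policy: the context must be created or resumed after a user
  // gesture, so call this from the scroll handler, not on page load.
  if (!audioCtx) {
    audioCtx = new (window.AudioContext || window.webkitAudioContext)();
  }
  if (audioCtx.state === 'suspended') {
    audioCtx.resume();
  }
}

function playChord(bandPowers, duration = 0.3) {
  ensureAudioContext();
  const now = audioCtx.currentTime;
  for (const [band, freq] of Object.entries(BAND_NOTES)) {
    const power = bandPowers[band] || 0; // assumed normalized to 0..1 per window
    if (power <= 0) continue;
    // One sinusoidal oscillator per band; band power controls the note's volume.
    const osc = audioCtx.createOscillator();
    const gain = audioCtx.createGain();
    osc.frequency.value = freq;
    gain.gain.value = 0.25 * power;
    osc.connect(gain).connect(audioCtx.destination);
    osc.start(now);
    osc.stop(now + duration);
  }
}

// Hook into the existing scroll handler instead of adding a new button, e.g.:
// scroller.addEventListener('input', () => playChord(currentBandPowers));
```

This avoids the extra play button entirely: the first scroll event both satisfies the user-gesture requirement and triggers playback, with chord volume driven by the per-band power in the current window.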