Can you write an overview of the whole code? It's been loading and writing for so long, and ChatGPT has become much, much slower. I'm using this VoiceWave ChatGPT voice-control thing; when I say "clear", it should send the message and clear the text window. And it actually works most of the time, if I speak properly into the microphone. Okay, now it doesn't. Clear. Clear. Okay, this is still generating forever.

So, that's what the application currently looks like. It loads some EEG data from file; you can check it out in the EEG noise-removal spectrum on the page. This particular recording has a seizure in it, and what you see here is the quiet after the storm, after the seizure. You can scroll through it in the EEG viewer. The dataset is on ieeg.org; I think anyone can download it, it just requires a basic registration. This particular dataset contains seizures, and that's what they look like. It's real EEG. I know things like this can look a bit fake, but no, those are genuine seizure spikes; check them out here. The tool also tries to reduce noise in the signal, which it does using wavelet transforms.

But now we're trying to turn this into music, for various reasons. One possible use case is being able to review EEG just by listening to it. And you don't want to be listening to sine tones, because that's annoying and it will just sound like noise, so you might as well turn it into music. Well, when I say music, it might not be perceived as music, but instead of sine waves it uses actual musical notes. Notes like this, just for testing: an actual F#6 or G6. The code that GPT-4 generated for us does the conversion using an interpolation. That's the general idea. I'm getting all sorts of errors with it, many due to the fact that the frequency components cannot be directly translated into musical notes, because they would be essentially outside the range of a typical instrument, for example a piano, and we're using a grand piano.
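To sketch why a direct translation fails: EEG activity mostly lives below about 40 Hz, while the lowest piano key, A0, is already 27.5 Hz, so a naive frequency-to-note conversion pins almost everything to the bottom of the keyboard. The helper below is illustrative only (the names are mine, not the app's actual code); it shows the standard frequency-to-MIDI conversion with clamping to the 88-key range:

```javascript
// Illustrative sketch, not the app's generated code: convert a frequency
// component to the nearest piano note name, clamped to A0..C8.
const NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

function frequencyToPianoNote(freqHz) {
  // MIDI note number from frequency: MIDI 69 is A4 = 440 Hz.
  let midi = Math.round(69 + 12 * Math.log2(freqHz / 440));
  // Clamp to the 88-key piano range: A0 (MIDI 21) .. C8 (MIDI 108).
  midi = Math.min(108, Math.max(21, midi));
  const name = NOTE_NAMES[midi % 12];
  const octave = Math.floor(midi / 12) - 1; // MIDI 60 -> C4
  return name + octave;
}
```

With this mapping, a 10 Hz EEG component clamps straight to A0, which is why the generated code rescales via interpolation instead of translating frequencies directly.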
So the interpolation does work, in the sense that it generates notes, but the algorithm cannot find some of these particular notes; I'm not sure if it's because they're not defined here. We're also having this debounce error. The debounce is meant to prevent sending the same request multiple times, so let's solve that one first. The error description says we actually don't have the function. (Clear. This doesn't work. Clear. Right, it did work. The voice control is hit and miss; it must be my pronunciation that the bot is not picking up.) So, do we define our own debounce function, or do we have a debounce available somewhere already? In the calling code we now have a debounced updateData/convertToMusic with a 500-millisecond delay. That looks good. It's also wired to the data slider. Okay, that error is gone.

I'm still getting the buffer error, though, so the notes will not play. The idea is that this real EEG gets translated into musical notes. Let's reduce the window size for a second. As you scroll through the EEG file, it converts the window to, well, music. I'm not saying it will sound like music, but it is converted to notes played with an actual soundfont called Grand Piano, so it should sound something like this. But we're having all these errors coming through: "The AudioContext was not allowed to start." We'll try to resolve it with both GitHub Copilot and GPT-4; it's a security feature of modern browsers. I like how GitHub Copilot already gives a resolution, and GPT-4 does the same, and more. They generate an update to, what was it, main.js: we need to handle the AudioContext, because I'm getting this warning that the AudioContext is not allowed to start unless there is an interaction with the page. Let's make the window size smaller again and try this. The main problem remains that we can't hear those notes, and that's obviously because of this message here: the path of a note is not found.
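The debounce the assistants suggested can be sketched like this. This is a minimal version, assuming no lodash-style debounce is already available; the function and element names in the usage comment are illustrative, not the app's actual identifiers:

```javascript
// Minimal debounce: only the last call within `delayMs` actually fires.
// This prevents re-converting the EEG window on every scroll tick.
function debounce(fn, delayMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delayMs);
  };
}

// Illustrative usage with the 500 ms delay mentioned above:
// const debouncedConvert = debounce(convertToMusic, 500);
// dataSlider.addEventListener('input', debouncedConvert);
```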
Yes, I thought we were not using the media-recording approach anymore, because that was generating a file. Right, so it sounds like we won't be able to bypass the security of modern browsers, and we kind of don't want to anyway, so we'll have to implement this: start the AudioContext on a user interaction. That should at least get rid of the AudioContext warning.

The music logic is still running. I've reduced the window size, so it should already be playing some notes, but it isn't. I can't hear anything, and I get the following error. Yes, the instrument is Grand Piano. Well, I mean, that's what it's called, and it should have a wide range of notes. I'm pretty sure the soundfont itself works, because we have this test button that actually plays something. We need some troubleshooting. The conversion is doing a 1-D linear interpolation. With a window size of 10, selecting another region of the EEG, the notes are B5, generated twice, and that plays okay. But this AudioContext thing is still not being used correctly.

We regenerate all the JS. Yes, this is updated: the JavaScript loads the Grand Piano soundfont, so it should have all the keys, and the error we were getting should go away. It fetches the EEG, updates the data (that happens when you're scrolling), plots the data, and converts it to music using another function. It should be legit, and the debounce function should prevent the user from triggering it too many times. You know, the piano is working, the EEG is working, it loads okay, but we're still having this "buffer not found" issue. The advice is to avoid re-initializing the piano or the AudioContext multiple times. Okay, how do I actually do that? There's this if-condition on the window.piano buffers, so that message is actually coming from our own code, which is good to know. playNextNote, document.getElementById — it should be the same. The document event listener on load calls loadPiano, then debounce. It's still skipping notes, and yet the piano itself works. We have B5. I need that prompt. That's a comment; let's try Copilot as well.
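The 1-D linear interpolation mentioned above can be sketched as a rescaling of each EEG sample in the current window from its amplitude range onto the piano's MIDI range. This is a guess at the shape of the generated code, with my own names, not the actual implementation:

```javascript
// Illustrative sketch of the 1-D interpolation idea: map the window's
// amplitude range [min, max] linearly onto MIDI notes A0 (21) .. C8 (108).
function samplesToMidiNotes(samples, loMidi = 21, hiMidi = 108) {
  const min = Math.min(...samples);
  const max = Math.max(...samples);
  const span = max - min || 1; // avoid division by zero on a flat window
  return samples.map(v =>
    Math.round(loMidi + ((v - min) / span) * (hiMidi - loMidi))
  );
}
```

This keeps every note inside the keyboard by construction, so out-of-range errors can only come from the soundfont's loaded samples, not from the mapping itself.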
So, doing "Explain this", Copilot gets access to that prompt as well. Its answer: the issue might be that the audio data for the note B5 has not been loaded when you are trying to play it, so ensure the piano sound is fully loaded before trying to play a note. But that code is actually working; as I said, calling window.piano.play('B5') works okay. GitHub Copilot doesn't have the full context. GPT-4 is not giving an answer either, but at least it understands what I'm trying to do.

Let's do some more troubleshooting. I'm going in circles. It does have C6, so now we change B5 to C6 and load it. The soundfont is okay, loading is okay. Time to restructure this code; it should be playing these tones. Let's start a new chat quickly. Do we have the prompt at the bottom as well? Just continue. I have the notes working okay, and I'm going to continue with debugging; that's probably the best advice. I'm probably going to get penalized for long prompts and get a timeout. It is simply a question of how we are using this JavaScript for playing sounds. Let's try to understand the difference between main.js and the regular JS: this one is using MIDI note numbers. Okay, we also got a timeout from GPT-4, as expected, and there's no option to regenerate.

You can install soundfont-player via npm or include it directly in your HTML. Loading an instrument looks like Soundfont.instrument(ac, 'clavinet').then(function (clavinet) { ... }).
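One possible workaround for the "buffer not found" message, separate from the re-initialization fix suggested above, would be to guard the play call: before playing a note, snap it to the nearest note that actually has a loaded sample. This helper is hypothetical — the source of the available-key list depends on how the soundfont was loaded — but the selection logic itself is plain JavaScript:

```javascript
// Hypothetical guard for missing note buffers: given a requested MIDI
// number and the list of MIDI numbers that actually have loaded samples,
// return the closest available one.
function nearestAvailable(requestedMidi, availableMidi) {
  return availableMidi.reduce((best, m) =>
    Math.abs(m - requestedMidi) < Math.abs(best - requestedMidi) ? m : best
  );
}

// Illustrative usage (the piano object and key list are assumptions):
// const note = nearestAvailable(wanted, loadedKeys);
// piano.play(note);
```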