Still, on this EEG-to-music conversion, can I get stuck into this one? It works by itself, but it's a bit too slow, and I don't know how to make it faster. I'll have to use GPT some more to resolve some of these things. It's sending too many requests when scrolling, which is a bit annoying. The core of EEG-to-music should be pretty straightforward. Maybe not; depends how you look at it. And this becomes a bit too slow. Yeah, that's the seizure there. Check your CPUs and performance mode. That should define the problem at the edges. Edge conditions on that thing essentially do not work.

Now, the other thing is we also have this test HTML. It just takes a frequency and generates a sine wave. Sounds like that; sorry if it was too loud. Essentially what we want to do now is combine the two: extract the frequency from here for a certain time window, translate it into a sinusoid, and keep playing it. We also want something interactive, so when you change the position within the file or the window size, the generated sound will change as well. So it feels like you are generating music.

So this code is working fine, playing a sine wave in a browser. Now, instead of having a fixed frequency component, the frequency value will be extracted from the EEG first, and then we play the sound as we go. We actually get rid of the button, so it's more interactive. Don't need the button.

So how to do it? The EEG is from an application; yeah, it's this one here. It's also not greatly responsive. I want something more responsive, and hopefully it will appear on the side sometime soon. Soonish. I'm thinking, should we use this one? Or start from scratch? No, before we start from scratch and it gets too complicated, I might actually go for this one, remove the spectrogram, and replace it with audio generation. How does that sound? Halfway, third way. So the first way would just continue from where we left off.
The second way is taking this and restructuring it: removing the spectrogram and adding audio generation options. The third way would be starting from this code here, an actual sound generator. I didn't realize how simple this was, because before we were trying to use Web MIDI with its JavaScript API, but apparently you don't need it. Also, the structure of that thing became rather too complicated.

The question is whether this one can be made a bit faster as well. I don't know; there's this lag from when I actually click on the screen to when the chart is changing. Yeah, they're changing in a different manner as well. The spectrogram is a bit more responsive. Actually, maybe not. Now I'm not clicking anything and it's still going. Why do I have 7% dropped frames? This is actually happening when I'm using the tool, so that's not cool, because it's affecting my live stream. I'm doing the live stream from the same server. Yeah, you can see more frames are being dropped. It's not good; too much data. I thought this data was meant to be loaded already.

I have to change this so it doesn't take effect unless you release the mouse. So if you scroll through it like crazy, it would not send anything. Yeah, similar to those. This EEG JS should be like the fast-scroller files. That's not cool; double-check it's the same thing. The fast scroller had an event listener; how is it not the same as that one? I don't want to just get rid of it, just in case it's actually needed. I can comment that you can run it locally using this app.py; it's much quicker on a local server. That's kind of obvious, but why is the one not doing anything? I don't get it. That's all. That's working. Let's see what happens to it. It's not working. It's not working. It needs both of them. Why? Now that sound should have been gone. Sorry for that. I should have been doing everything on a local server, so it's not affecting my internet connection. Okay, I literally don't know why we have those twice. Each one of them has its own problems.
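One way to stop the request flood while scrolling is the commit-on-release idea mentioned above: track the slider value continuously, but only act on it when the mouse is released. A minimal sketch in plain JavaScript; the element id `scroller` and the `fetchEegWindow` function are hypothetical names, not taken from the actual app:

```javascript
// Commit-on-release: remember the live slider value while dragging, but only
// apply it (e.g. fetch a new EEG window) once, when the user lets go.
class ReleaseCommitter {
  constructor(onCommit) {
    this.onCommit = onCommit;   // called once per release, with the final value
    this.pending = null;        // latest value seen while dragging
    this.committed = null;      // last value actually applied
  }
  input(value) {                // fires continuously during a drag
    this.pending = value;
  }
  release() {                   // fires once, on mouse release
    if (this.pending !== null && this.pending !== this.committed) {
      this.committed = this.pending;
      this.onCommit(this.committed);
    }
  }
}

// Browser wiring, guarded so the logic above stays testable outside a browser.
// "scroller" and fetchEegWindow are assumptions for illustration.
if (typeof document !== "undefined") {
  const committer = new ReleaseCommitter(pos => fetchEegWindow(pos));
  const slider = document.getElementById("scroller");
  slider.addEventListener("input", e => committer.input(Number(e.target.value)));
  slider.addEventListener("change", () => committer.release()); // "change" fires on release
}
```

This leans on the fact that a range input fires `input` on every movement but `change` only when the value is finalized, so the server sees one request per drag instead of hundreds.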
We need to mix and match certain things for it to work better. Anyway, we don't need all the media stuff, that's for sure. We need to make all the applications lighter so they don't use too many resources.

That's the test code that's working. Okay, that's the one generating sound at a specific frequency. Can we add a frequency slider to the code? Should we start another application? Let's try this quickly. It feels like one of those old-school student projects. Can I open it? I want this one again, probably the one that converts the EEG to music, just to generate the frequency components. Change it to continuous play without the button. Essentially, the frequency should change when the slider changes, but the tone should be constantly playing. Pretty straightforward ideas, right? Yeah, we don't want any buttons; it's generating forever. Save this one for a sec. How many hertz can actually be heard? So it's a good thing. I can't hear it. No, it's still there. Let's try it with those closures for now. Okay, that should have stopped. Right, now from this frequency. Yeah, this musical version. It can probably go. Yeah, that should do it. One extra frequency slider in JavaScript. Okay, this is for reference only; don't generate any code. Okay, now we have the LHS. Let's get that. It's not far. It's having trouble with my microphone. Let's work here. Let me continue. There's only a regenerate option.

To implement continuous tone generation, you would initialize and start an oscillator outside of any event handler, and only change its frequency in response to slider input events. Ensure that your UI elements, like sliders and buttons, have the correct IDs and event listeners attached to them. If the goal is to play a continuous tone that changes with the EEG data input, integrate the tone generation with the EEG data-fetching functionality, dynamically adjusting the tone based on the incoming EEG data.
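That advice can be sketched as follows: one oscillator, started once, with only its frequency changing on slider input. This is a sketch, not the app's actual code; the slider id `freq` is an assumption, and note that most browsers suspend a fresh `AudioContext` until a user gesture, so in practice one initial click may still be needed to resume audio even without a play button:

```javascript
// Pure helper, kept separate from the Web Audio wiring so it can be tested
// outside the browser: keep a requested frequency inside an audible range.
function clampFrequency(hz, lo = 20, hi = 2000) {
  return Math.min(hi, Math.max(lo, hz));
}

// Browser-only wiring (a sketch; "freq" is a hypothetical slider id).
if (typeof window !== "undefined" && window.AudioContext) {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  osc.type = "sine";
  osc.frequency.value = 440;      // initial tone
  osc.connect(ctx.destination);
  osc.start();                    // started once, outside any event handler

  document.getElementById("freq").addEventListener("input", e => {
    // Ramp briefly instead of jumping, to avoid audible clicks.
    const target = clampFrequency(Number(e.target.value));
    osc.frequency.linearRampToValueAtTime(target, ctx.currentTime + 0.05);
  });
}
```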
This script provides a solid foundation for a web application that visualizes EEG data and interacts with MIDI devices. Depending on your exact requirements, you may need to adjust the interaction buttons and data handling to suit your application's goals.

In the next draft, I actually need that slider frequency to be changing depending on the frequency of the EEG. Is there a slider, or is the frequency actually changing based on the EEG signal?

Process EEG data: analyze or process the EEG data to determine how it should affect the tone's frequency. This might involve calculating averages, detecting peaks, or applying other signal-processing techniques. Continuous tone generation: like in the slider example, initialize and start an oscillator outside of any event listeners so that it plays continuously. Real-time data handling: if the EEG data is streamed in real time, you need to establish a mechanism, like WebSockets, for continuous data transmission. Data processing: the way you interpret EEG data to control frequency can vary greatly depending on your application's goals and the nature of the EEG signals. User experience: continuously changing tones can be disorienting or uncomfortable for users, so ensure there are controls to start, stop, or adjust the audio output. This approach assumes a more complex integration between EEG data and audio output, requiring a good understanding of both EEG signal processing and audio synthesis.

Yes, there are a couple of things. One is that if we have a small window size, we can generate the sounds by moving the scroller through the file. If it's a longer segment, we want to break it into segments, generating multiple tones consecutively. The second option is probably the better one. At this stage, it's not happening for us today; we might continue next time, tomorrow. See ya, bye.
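The "process EEG data" step above can be sketched with a very simple zero-crossing estimator: count sign changes in one EEG window to estimate its dominant frequency, then map the EEG band onto an audible range. The band limits (1 to 40 Hz) and audible range (220 to 880 Hz) here are illustrative assumptions, not values from the actual application:

```javascript
// Estimate the dominant frequency of one EEG window by counting zero
// crossings (two crossings per cycle of a roughly sinusoidal signal).
function dominantFrequency(samples, sampleRateHz) {
  let crossings = 0;
  for (let i = 1; i < samples.length; i++) {
    if ((samples[i - 1] < 0) !== (samples[i] < 0)) crossings++;
  }
  const durationSec = samples.length / sampleRateHz;
  return crossings / (2 * durationSec);
}

// Map an EEG-band frequency (assumed 1-40 Hz) linearly into an audible
// range (assumed 220-880 Hz), clamping values outside the band.
function eegToAudible(eegHz, eegLo = 1, eegHi = 40, audioLo = 220, audioHi = 880) {
  const t = Math.min(1, Math.max(0, (eegHz - eegLo) / (eegHi - eegLo)));
  return audioLo + t * (audioHi - audioLo);
}

// Example: a pure 10 Hz sine sampled at 256 Hz for one second.
const sr = 256;
const win = Array.from({ length: sr }, (_, i) => Math.sin(2 * Math.PI * 10 * i / sr));
const f = dominantFrequency(win, sr);   // roughly 10 Hz (zero-crossing estimate)
const pitch = eegToAudible(f);          // lands inside the 220-880 Hz range
```

Zero crossings are crude compared to an FFT peak, but they are cheap enough to run per scroll position, which matters given the performance complaints earlier.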
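For the longer-segment case, the consecutive tones can be scheduled back-to-back on a single oscillator with `setValueAtTime`, so the sound stays continuous while stepping through the segment's windows. A sketch; the example frequencies and the 0.5 s tone duration are placeholders, not values from the app:

```javascript
// Schedule one frequency per EEG window, each lasting toneDur seconds,
// on a single running oscillator. Returns the time the sequence ends.
function scheduleTones(osc, startTime, frequencies, toneDur) {
  frequencies.forEach((hz, i) => {
    osc.frequency.setValueAtTime(hz, startTime + i * toneDur);
  });
  return startTime + frequencies.length * toneDur;
}

// Browser-only usage (guarded so the scheduler stays testable in Node).
if (typeof window !== "undefined" && window.AudioContext) {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  osc.connect(ctx.destination);
  osc.start();
  // e.g. four windows' worth of frequencies, half a second each
  scheduleTones(osc, ctx.currentTime, [330, 392, 440, 523], 0.5);
}
```

Returning the end time makes it easy to chain the next batch of windows as the scroller advances, instead of restarting the oscillator per segment.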