managing the audio context, loading a piano instrument, fetching the EEG data, plotting the EEG data on the chart, converting the EEG data to MIDI notes, and playing the sequence. Yeah, so as you can probably tell, it is treating the amplitudes as frequencies. Let's try another channel; this is what we need to change. I'm just wondering if we should leave this in as an option, an amplitude-based music option, first because it's already there, and because it kind of works, generating music based on the amplitude of the signal. This is another channel. Yes, we can see three more, the same. It kind of works, it does something, just not very useful. Also, expect the seizures: in the middle of this file I have seizures, so we'll have much higher variability. But try clicking on that without it generating multiple times. Yeah, there's a problem with this code: when we scroll using the data position scroller, it registers more than one value. That's something else we need to fix.

GitHub Copilot doesn't seem to get the context, to spot the mistake, so we ask GPT-4. We can ask an open-ended question about the load_data function: what does load_data return? Because the code assumes frequencies are being returned, but it's actually amplitudes, so it needs to be corrected. Yeah, GitHub Copilot says: load_data returns a list of lists; each inner list represents one row of EEG data and contains two elements, the reading from the specified channel and the corresponding time in seconds; the function returns a specified number of these rows, starting from a specified start point, for the channel the EEG is taken from. So the question is: is it returning frequencies? The answer should be no. Does load_data return frequencies? It should say no, even though the code assumes so. No, the function does not return frequencies; it returns the EEG data from the specified channel. Yes, these are voltage values, not frequencies; the conversion to frequencies, or any other form of the data, would need to be done separately.

So where is the conversion to frequencies performed? Well, nowhere in this code; we'd need a separate function for it. Yeah, it's not done in the code, that's correct. The code takes what's called eeg_frequencies, which actually represents voltages, that's correct, and scales these values to a range. This is not a conversion to frequencies in the typical sense. If you want to convert the EEG voltages to frequencies, you would need to perform a Fourier transform or a similar operation, which is not present in the provided code. Please note that the conversion from voltages to frequencies in EEG data is a complex process, blah, blah, blah. Yeah, of course it is. We still need to do it. Instead of this prompt here, where it's obviously just suggesting we correct the names, we want to actually update the code to perform the conversion to frequencies. So far I haven't had much luck with GitHub Copilot; that's probably because I don't know how to use it to get what more experienced programmers get out of it. Right, so it's writing some convert_to_frequency at line 50. I assume it's using, let's see what's different, the real FFT versus the full FFT. We might go with the GPT-4 suggestion; I mean, GitHub Copilot supposedly uses GPT-4 anyway, but for some reason it gives different functions. At this stage I trust the OpenAI interface to a somewhat larger extent. We need to get this right, so we might as well.
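For reference, here is a minimal sketch of what such a convert_to_frequency function tends to look like: take the FFT of a segment and pick out the dominant frequency. The function and parameter names are assumptions based on what's discussed in this session, not the exact code Copilot produced.

```python
import numpy as np

def convert_to_frequency(segment, fs=400):
    # Dominant frequency (Hz) of one EEG segment: take the real FFT,
    # find the strongest bin (skipping DC), and map it back to Hz.
    segment = np.asarray(segment, dtype=float)
    segment = segment - segment.mean()           # remove the DC offset
    spectrum = np.abs(np.fft.rfft(segment))      # magnitude spectrum
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]    # +1 re-aligns after skipping bin 0
```

Subtracting the mean first keeps the DC offset from always winning as the "dominant" bin.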
It's finding one dominant frequency for the whole segment, is it? I'll have to check. What's producing the frequencies? So what's the segment size, and where is segment coming from? There's a question about the frequency bands, from delta upwards to about 40 hertz, and whether we want to match notes. Yeah, I do want to match notes eventually; we'll start with some sort of linear interpolation. At the moment the template just does a basic amplitude-to-frequency conversion: the higher the amplitude, the higher the frequency of the note. But now we're turning it into an actual frequency conversion. So that's the template it originally gave, and now we'll convert it to actual frequencies.

I think one of the main problems will be the choice of window size. The EEG data should be the data displayed on the chart; currently that's only the 16 data points, and the default is 100. This recording is very long, so there's no shortage of data. Also, this data's sampling rate is 400 Hz, and I don't know how to use that information when converting to music; any suggestions are welcome. The idea of this tool, eventually, is that you could scroll through pre-recorded EEG, choose your channels and a window size, and then potentially additional controls for selecting your segment size. Essentially I'll just have it fixed, so that, for example, each window is divided into, whatever, 10 segments, with 20 or 30 as your minimum window size, so for every 30 data points you'd be getting half the musical notes. Or, yeah, how many points do you need, minimum, to calculate an FFT? I would say probably around 10 or so. So if we go by the defaults, for every 100 data points we should generate no more than about 10 musical notes. Essentially, when you load or reload the page, the defaults are channel zero, 100 data points, position zero in the file. There are seizures later in the file. So, does it start playing? No. Yeah, there's another buggy thing: if I actually use the data position scroller, it registers it as if I clicked twice or more, and it repeats the same thing twice, which you could possibly use deliberately, with different musical instruments or something. The other thing is that the other channels could then be different musical instruments, so we'd need 16 of them in this case. Well, leave these prompts for now; I'll just save the chat for later. Any suggestions on how to do this are more than welcome. Essentially we want to hear what the seizures sound like, and yes, it's highly biased, because you can essentially make them sound like anything you like.

So we're adding this. We already have numpy as np, and we have this dominant frequency. The question is, where is segment coming from? We don't have it in the code; we should have all our functions at the bottom. We already have numpy as np loaded. The segment comes from segmenting; we're just taking the EEG data, quite good. So now, in convert_eeg, one moment, we'll call it convert_eeg, we have eeg_data. Right, so these are EEG voltages, not EEG frequencies. Then we have this bit; let's check, triple-check, eeg_frequencies. This is the sampling rate; it should be 400, it's actually something like 399.9, but yeah, rounded up it should be fine. Then we get the MIDI numbers from the EEG frequencies and return the conversion result, the MIDI numbers. Yeah, should this be a global variable? Make it global, right, convert_eeg is global. What else?
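Putting the pieces together, here is a rough sketch of what convert_eeg could look like under the scheme just described: a fixed number of segments per displayed window, each at least ~10 samples long so the FFT has something to work with. It reuses np and the convert_to_frequency sketch above; again, the names (convert_eeg, notes_per_window) are assumptions for illustration.

```python
def convert_eeg(eeg_data, fs=400, notes_per_window=10):
    # Split the displayed window into fixed segments and return one
    # dominant frequency per segment. With the default 100-point window
    # this gives 10 segments of 10 points each, matching the "no more
    # than ~10 notes per 100 data points" rule of thumb above.
    eeg_data = np.asarray(eeg_data, dtype=float)
    segment_size = max(len(eeg_data) // notes_per_window, 10)  # >= ~10 points per FFT
    frequencies = []
    for start in range(0, len(eeg_data) - segment_size + 1, segment_size):
        segment = eeg_data[start:start + segment_size]
        frequencies.append(convert_to_frequency(segment, fs))
    return frequencies
```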
Sampling rate: ensure that the sampling rate fs used in the FFT matches the actual sampling rate of your EEG data. Yeah, we'll make this a global variable, so that's fine. Windowing: the FFT is applied to segments, or windows, of the data. So what's the difference between a segment and a window? Yeah, segment is only used in getting the frequencies. You might need to creatively map these frequencies to a musical scale, or use some form of quantization, to fit them into a musical context. Yeah, that would be a big thing, wouldn't it? You might need to optimize your code for performance. Currently I only have one SoundFont loaded, the piano, but yes, I want to try others as well; bass guitar would be really nice for seizures, I think, though I don't know if it would sound more like drums or something.

Save this. This wouldn't work as-is, would it? So do we need to scale differently? I need suggestions for scaling EEG to MIDI; that would be the range of the notes, I think. To modify the scale_eeg_to_midi function to handle frequency data, you'll need to consider how the range of EEG-derived frequencies maps onto the MIDI note range. Given that EEG frequencies typically range from around 0.5 Hz to 70 Hz, covering delta, theta, alpha, beta, and gamma waves, you should decide how to map these frequencies to a musical scale. The other thing is, we could map different instruments to different bands; that kind of makes more sense, doesn't it? A common method is to scale the frequencies logarithmically, to match the way musical pitch is perceived. Here's an example of how you can modify the scale_eeg_to_midi function to handle frequency data. Right, so that's the frequency-to-number conversion. Yeah, frequency below zero, yeah, obviously; otherwise you apply that logarithmic scale. Now the scaling, that's the scale function. So it's returning MIDI numbers, that's okay.

frequency_to_midi_number converts a frequency value to a MIDI number based on the formula for calculating the pitch of a musical note: MIDI note 69 is standard A4, 440 Hz, and each increase by one represents a semitone. scale_eeg_to_midi(eeg_frequencies) converts each EEG frequency to a MIDI number using frequency_to_midi_number, and also clamps the MIDI numbers to a valid range, 21 to 108 for a standard piano. This approach assumes a direct mapping from frequency to pitch. However, the musical translation of EEG data might not be straightforward due to the non-musical nature of the EEG frequencies; you may need to experiment with different mappings or scales to achieve a musically coherent result. Additionally, consider the implications of mapping very low or very high frequencies, as they might fall outside the range of human hearing or the range of MIDI notes. Yes, for that last point we need some sort of range calibration. We have this function twice now; here's where it's being used. Let me sort this out. Let's test this and see what we get.
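A sketch of the two functions as described above: frequency_to_midi_number applies the standard pitch formula (MIDI 69 = A4 = 440 Hz, one step per semitone), and scale_eeg_to_midi clamps the results to the piano range.

```python
import math

def frequency_to_midi_number(freq):
    # MIDI note 69 is A4 (440 Hz); each step of 1 is a semitone.
    if freq <= 0:
        return None  # no valid pitch for non-positive frequencies
    return round(69 + 12 * math.log2(freq / 440.0))

def scale_eeg_to_midi(eeg_frequencies):
    # Map EEG-derived frequencies to MIDI numbers, clamped to the
    # range of a standard piano (21 = A0 up to 108 = C8).
    midi_numbers = []
    for freq in eeg_frequencies:
        midi = frequency_to_midi_number(freq)
        if midi is not None:
            midi_numbers.append(min(max(midi, 21), 108))
    return midi_numbers
```

Note that with this direct mapping the whole 0.5-70 Hz EEG band lands at roughly MIDI 21-37, pinned to the bottom of the piano (everything below 27.5 Hz clamps to note 21), which is exactly why the range calibration mentioned above is needed.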