Some sort of delay, some sort of delay... right, let me get myself settled. That's a good sign, jumping straight into it; a bit of a contradiction, isn't it? Hopefully someone can join us live. I'll share the stream link to Twitter, and I'll keep calling it Twitter just to annoy a certain multimillionaire who doesn't actually use it, just posts and runs. We mainly use LinkedIn, so if you want to connect on LinkedIn, just add me to your network. The live development session has been posted, and we have some prompts ready.

We've loaded the code into GPT-4, and yes, the script does handle various functionalities: managing the audio context, loading a piano instrument, fetching the EEG data, plotting the EEG data on the chart, converting EEG data to MIDI notes, and playing the sequence. Let's get more information on one of these: converting EEG data to MIDI notes. GPT-4 says: in your code, you load the data, then do frequency analysis. EEG data typically comes in the form of time-series voltage values, so to convert it into frequencies you might need to perform a frequency analysis such as a Fourier transform; however, it claims that in this application the EEG data is already in a frequency-like format, which I don't think is true, and that it's scaled and rounded to the nearest MIDI note. It's not in frequency form, and I know that because when we look at which code is actually being invoked: we're currently loading channel zero, and the "frequency" translation is essentially amplitude modulation, where as the signal's amplitude goes up, the note frequency goes up. Which is a bit odd, but I guess that's just the template code. To illustrate how it works, let's select channel seven and listen to the notes generated. You can hear the note frequency following the signal, so we get a higher-frequency note with increased amplitude.
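To make the template's behaviour concrete: here is a minimal sketch of that amplitude-modulation mapping, assuming the input is a plain list of voltages and using the `np.interp` approach and the 48 to 84 (C3 to C6) note range discussed in this session. The function body is my reconstruction for illustration, not the project's exact source.

```python
import numpy as np

def scale_eeg_to_midi(eeg_voltages, midi_min=48, midi_max=84):
    """Linearly map raw EEG voltage values onto a MIDI note range.

    This is the amplitude-modulation behaviour of the template:
    higher voltage -> higher note. The 48-84 (C3-C6) range follows
    the template code discussed in the session; treat this as a
    sketch, not the project's exact implementation.
    """
    voltages = np.asarray(eeg_voltages, dtype=float)
    # np.interp maps [min(v), max(v)] linearly onto [midi_min, midi_max]
    midi = np.interp(voltages,
                     (voltages.min(), voltages.max()),
                     (midi_min, midi_max))
    return np.round(midi).astype(int)

# Hypothetical voltages: lowest -> 48, midpoint -> 66, highest -> 84
notes = scale_eeg_to_midi([-40.0, 0.0, 40.0])
```

Note there is no frequency analysis anywhere in this mapping, which is exactly the issue being discussed.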
And that's quite obvious; you don't need to be a musician to be able to tell. So the issue is that GPT-4 is saying the data is already converted into frequency, when there's actually no FFT done. Let's double check. Yes, we're using linear interpolation, just np.interp from NumPy, and the SciPy interpolator defined here is not actually being used. So the problem is with these eeg_frequencies; let's search for them. They come from `eeg_frequencies = [data[0] for data in eeg_data]`, and eeg_data isn't frequencies at all; it's voltage over time. Let me explain.

First, GitHub Copilot. Where is Copilot... GPT-4 already has all the code, so let's give Copilot the same question. Asked about the variable, it doesn't have the context: "in the context of your project... that structure could also function as a class". Copilot gives something super generic, not very useful. Let's see what GPT-4 has to say, asking specifically about the code: how is converting EEG data to MIDI notes done? And while GPT-4 is generating, let's try getting Copilot to explain the whole code, just so it has the context. My assumption is that those "frequencies" are not actually frequencies, because I don't remember doing an FFT or anything like that anywhere.

GPT-4 answers: "Extracting EEG frequencies and scaling to the MIDI range: the convert_eeg_to_music routine is responsible for this conversion. It first loads the EEG data using load_data(start, count, channel), which returns the EEG data for a specified segment defined by start, count, and channel. The EEG data typically consists of frequency values; these frequency values are extracted from the loaded data: eeg_frequencies = [data[0] for data in eeg_data]. The extracted frequencies are then scaled to a MIDI range using scale_eeg_to_midi(eeg_frequencies)." Okay, the problem is those are not frequencies, those are amplitudes. I wonder what it will do if I point that out, with two question marks.

GPT-4: "I would like to address the confusion in my previous explanation. In the context of EEG (electroencephalogram) data, the term frequency usually refers to the rate at which the electrical signals are oscillating, which is measured in hertz (Hz). However, the raw data from an EEG typically consists of voltage values over time, not frequency values; these voltage values represent the electrical activity of the brain." Okay, so why were you treating them as frequencies? "In your application, the process of converting EEG data to MIDI notes is based on these voltage values rather than on specific frequencies of brain waves. Here's a corrected explanation of how the conversion is done." Good, it's corrected itself, though this might be tricky to catch if you haven't worked with EEG before. "Extracting EEG voltages: the EEG data loaded by load_data(start, count, channel) is in the form of voltage values over time for different channels. The function convert_eeg extracts these voltage values from the EEG data: eeg_data = load_data(start, count, channel)." Let's read this again quickly: yes, convert_eeg extracts voltage values, then scales the voltage values to the MIDI range. "The function scale_eeg_to_midi(eeg_frequencies), where eeg_frequencies is actually a misnomer and should be something like eeg_voltages, takes these voltage values." A misnomer? Okay, if it's a misnomer, why did you generate this code to begin with? "It scales these values to fit within a predefined range of MIDI note numbers. This scaling is done using linear interpolation"
with NumPy's interp function. "The voltage values are mapped to a range of MIDI note numbers, 48 to 84 in your code, corresponding to C3 to C6." That's the part that's wrong, and the part we actually want it to fix. "The resulting MIDI note numbers can then be used to generate music, which is a creative way to represent the EEG data's patterns in a musical format. The actual musicality of the output depends on how these MIDI notes are used, for instance the duration, rhythm, and the instrument used for playback. The function scale_eeg_to_midi(eeg_frequencies), where eeg_frequencies is actually a misnomer and should be something like eeg_voltages... shall we update the code?" Absolutely, but don't just update the name. This function is taking EEG voltages, and we know it "works" in that sense. For example, if I take another channel, any channel, and play the sound: yes, as you can obviously tell, it is treating the amplitudes as frequency. Try another channel; same thing. This is what we need to change. I'm just wondering if we should leave the current behaviour in as an option, an amplitude-modulation music option, first because it's already there, and it kind of works, generating music based on the amplitude of the signal. Here's another channel; you can see these three are more or less the same. It does something, just nothing very useful. Also, expect seizures in the middle of this file, where we'll have much higher variability.

There's also a problem with this code: when we scroll using the data position scroller, it registers more than one value, so that's something else we need to fix; try clicking on it without it triggering generation multiple times. GitHub Copilot doesn't seem to get the context or the mistake, so we ask GPT-4. Let's just ask an open-ended question: what does the load_data function return? Because the code assumes frequencies are being returned, but it's actually amplitude, so it needs
to be corrected. GitHub Copilot says: load_data returns a list of lists; each inner list represents a row of EEG data and contains two elements, the EEG reading from the specified channel and the corresponding time in seconds. The function returns the specified number of these rows, starting from a specified start point, from the chosen channel. So the question is: is it returning frequencies? The answer should be no. Does the load_data function return frequencies? It should say no, even though the code assumes so. GPT-4: "No, the function does not return frequencies. It retrieves EEG data from the specified channel; these are voltage values, not frequencies. The conversion to frequencies, or any other form of the data, would need to be done separately." So where is the conversion to frequencies being performed? Well, nowhere in this code; there's no separate function for it. "It's not done in the code, that's correct. It takes what's called eeg_frequencies, actually representing voltages, that's correct, and scales these values to a range. This is not a conversion to frequencies in the typical sense. If you want to convert the EEG voltages to frequencies, you would need to perform a Fourier transform or a similar operation, which is not present in the provided code. Please note that the conversion from voltages to frequencies in EEG data is a complex process..." blah blah, yes, of course it is; we still need to do it. So instead of this prompt here, where it's obviously just suggesting to correct the names, we want to actually update the code to add a conversion to frequencies. Let's ask Copilot the same question. So far I haven't had much luck with GitHub Copilot, though that's probably because I don't know how to use it; maybe it's geared towards more experienced programmers. Right, so it's writing some convert-to-frequency code, using rfft rather than fft as far as I can tell. I might go with GPT-4's suggestion instead; we can compare it with GPT-4's anyway
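For context, here is roughly what such an FFT-based dominant-frequency helper looks like: a sketch under the session's assumptions (a single channel of voltage samples, 400 Hz sampling rate), with `get_dominant_frequency` as a hypothetical name for illustration, not Copilot's or GPT-4's exact code.

```python
import numpy as np

def get_dominant_frequency(segment, sampling_rate=400.0):
    """Return the dominant frequency (Hz) of one EEG segment via an FFT.

    Sketch only: assumes `segment` is a 1-D sequence of voltage samples
    and that 400 Hz is the recording's sampling rate, as mentioned in
    the session.
    """
    segment = np.asarray(segment, dtype=float)
    segment = segment - segment.mean()        # remove the DC offset
    spectrum = np.abs(np.fft.rfft(segment))   # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / sampling_rate)
    return freqs[np.argmax(spectrum)]

# A pure 10 Hz sine sampled at 400 Hz for one second: the dominant
# frequency lands exactly on the 10 Hz bin.
t = np.arange(400) / 400.0
dominant = get_dominant_frequency(np.sin(2 * np.pi * 10.0 * t))
```

Removing the mean first matters: otherwise any voltage offset dominates the 0 Hz bin and every segment "wins" at 0 Hz.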
but for some reason it gives you different functions each time, and I trust the ChatGPT interface, the OpenAI interface, to a somewhat larger extent. We need to get this right, so we might as well read the whole thing. "Data conversion from EEG voltage values to frequencies: in your EEG-to-music application, you first need to decide on the method for converting these voltages to frequencies. One common approach is to analyze the EEG signal in terms of its frequency components using a Fourier transform, such as the fast Fourier transform (FFT). This method decomposes the signal into its constituent frequencies and is particularly useful in EEG analysis. However, it's important to note that converting EEG voltage values directly to musical note frequencies will not always yield musically coherent results, as EEG signals do not inherently align with musical scales; the conversion might need creative adjustments to make the resulting music aesthetically pleasing. Here's a basic outline of how you can implement this. One: add a function to compute frequencies from EEG data. You can add a function that uses FFT to compute the dominant frequency in each segment of the EEG data. This function will be called from your convert_eeg routine: import numpy as np; def get_dominant_frequencies(eeg_data, sampling_rate): ..." Yeah, sometimes the text-to-speech actually reads out the code and sometimes it doesn't, but we get the gist. The question is whether it's finding one dominant frequency for the whole segment; I'll have to check... no, it's returning a list of frequencies. So what's the segment size, where is the segment coming from? "In your convert_eeg routine, call this function to get the frequencies and then convert them to MIDI notes." There's a question from the chat: frequency or amplitude? The bands from delta upwards go to about 40 hertz; do you want to match notes to them? Yes, I do want to match notes eventually. We'll start with some sort of linear
interpolation, but eventually we'll try to find a more creative transfer function, so that it actually sounds more like music. At the moment the template just does a basic amplitude-to-frequency conversion: the higher the amplitude, the higher the frequency of the note. That's the template it originally gave, and now we're turning it into an actual frequency conversion. I think one of the main problems will be choosing the window size. The EEG data here should be the data displayed on the chart; currently it's only 16 data points, and the default is 100. The recording is very long, so there's no shortage of data, and the sampling rate is 400 Hz; I don't know yet how to use that information when converting to music, so any suggestions are welcome. The idea of this tool, eventually, is that you could scroll through pre-recorded EEG, choose your channels and a window size, and then potentially have additional controls for selecting your segment size; or just have it fixed, so that for example each window is divided into, whatever, 10 segments, with 20 or 30 as your minimum window size, so for every 30 data points you'd get half that many musical notes. How many points do you need at minimum to calculate an FFT? I'd say probably about 10. So going by the defaults, for every 100 data points we should generate no more than about 10 musical notes. When you reload the page, it defaults to channel 0, 100 data points, at position zero in the file; there are seizures later on. Does it start playing? No. And there's that buggy thing again: if I use the data position scroller, it registers as if I clicked twice or more, repeating the same thing twice, which you could possibly do with
different musical instruments or something. That's the other idea: different channels could be different musical instruments, in which case we'd need 16 of them. Let's leave these prompts for now and save the chat for later; any suggestions on how to do this are more than welcome. Essentially we want to hear what the seizures sound like, and yes, it's highly biased, because you can essentially make them sound like anything you like.

So, adding this code. We already have numpy as np, and we have this get_dominant_frequencies function; the question is where this segment variable comes from, because we don't have it in the code. We should keep all our functions at the bottom. For the segment, let's just take in the EEG data and sweep through this quickly: "in your convert_eeg, call this function to get the frequencies and then convert them to MIDI notes." So now in convert_eeg we work with the EEG data: this is eeg_voltages, then eeg_frequencies; check and triple-check. The sampling rate should be 400; strictly it's something like 399.9, but rounding up should be fine. Then we get the MIDI numbers from eeg_frequencies and return them. Should the sampling rate be a global variable? Right, let's make it global. What else? "Modify scale_eeg_to_midi to handle frequency data: the scaling might need to be adjusted depending on the range of frequencies you're expecting from the EEG. Sampling rate: ensure that the sampling rate fs used in the FFT matches the actual sampling rate of the EEG data." We've made this a global variable, so that's fine. "Windowing: the FFT is applied on segments, or windows, of data; decide on the size of these windows based on your application's needs." So what's the difference between a segment and a window here? It's only used in getting the frequencies. "Musical mapping: the direct mapping of the EEG frequencies to musical notes"
might not be straightforward; you might need to creatively map these frequencies to a musical scale, or use some form of quantization to fit them into a musical context." Yeah, that'll be the big one, won't it. "Performance considerations: FFT and frequency analysis can be computationally intensive, especially for large datasets or real-time applications; you might need to optimize your code for performance." Currently I only have one SoundFont, a grand piano, but yes, I want to try others as well; a bass guitar would be really nice for seizures, I think, or maybe it would sound more like drums. Let's save this.

This wouldn't work as-is, would it; it needs to scale differently, so let's ask for a suggestion for scaling EEG to MIDI, meaning the range of the notes. "To modify the scale_eeg_to_midi function to handle frequency data, you'll need to consider how the range of EEG-derived frequencies maps onto the MIDI note range. Given that EEG frequencies typically range from around 0.5 Hz to 70 Hz, covering delta, theta, alpha, beta, and gamma waves, you should decide how to map these frequencies to a musical scale." The other idea would be to map different instruments to different bands; that kind of makes more sense, doesn't it. "A common method is to scale the frequencies logarithmically, to match the way musical pitch is perceived. Here's an example of how you can modify scale_eeg_to_midi to handle frequency data." Right, so that's the frequency-to-note-number conversion, with a check for frequencies at or below zero, obviously; we're already doing the logarithmic scaling in the scale function, and returning MIDI numbers, so that's okay. "frequency_to_midi_number converts a frequency value to a MIDI number based on the formula for calculating the pitch of a musical note: MIDI note 69 is standard A4, 440 Hz, and each increase by one represents a semitone. scale_eeg_to_midi(eeg_frequencies) converts each EEG frequency to a MIDI number using frequency_to_midi_number; it also clamps the MIDI numbers to a valid range, 21 to 108 for a standard piano. This approach assumes a direct mapping from frequency to pitch; however, the EEG data's musical translation might not be straightforward due to the non-musical nature of the EEG frequencies. You may need to experiment with different mappings or scales to achieve a musically coherent result. Additionally, consider the implications of mapping very low or very high frequencies, as they might fall outside the range of human hearing or the range of MIDI notes." Yes, for that last point we need some sort of range calibration for the edge cases.

Let's try this one. We have this function twice now; this one is the one being used, so comment the other out, and let's test it. We just randomly select a channel and... "tuple index out of range". I think we want to see how large the midi_numbers array is, how many notes we're getting out of the conversion. GPT-4: "The error message 'tuple index out of range' suggests that there's an issue in your convert_eeg function where it's attempting to access an element of a tuple that doesn't exist. This is often due to mismatched data structures or incorrect assumptions about the size or shape of the data being processed. Given that the error occurs in convert_eeg, we should examine the lines of code where tuple indices are used; the most likely culprit is in the processing of the data returned by load_data(start, count, channel). Here are the steps to troubleshoot this issue: ensure that load_data is returning data in the format you expect, and print out the structure and length of the data to verify it matches your expectations. Add validation or error handling to check the structure of the data returned by load_data; for instance, ensure that the data is not empty and that each element of the list has the expected format." Now, the loading of the data should be fine, because it was working before, but yes, we could use more debugging; let's print the first five elements of eeg_data. "Make sure that the get_dominant_frequencies function, or any similar function you're using to process the EEG data, handles the data correctly and aligns with the structure of eeg_data." Normally it's good at solving this type of problem; I don't know, it made a wrong assumption about the structure of the data. "Based on the sample data you provided from the convert_eeg function, it appears that eeg_data is a list of lists, where each inner list contains two elements: a voltage value and a timestamp. The 'tuple index out of range' error suggests that the code is expecting a different structure somewhere in the process. The error might occur where the code tries to access elements from each inner list in eeg_data; if the code expects a different structure, like a tuple with more elements, this could cause the error. Given your data structure, you should ensure that all accesses to eeg_data correctly index into these inner lists. For example, the line eeg_voltages = [data[0] for data in eeg_data] is correct for extracting the first element, the voltage value, from each inner list; ensure similar correctness wherever eeg_data is accessed. The get_dominant_frequencies function, or any function you use to analyze the frequency content of the EEG data, must handle the data structure correctly; ensure this function expects and correctly processes a list of voltage values."
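A tiny sketch of those structure checks, using hypothetical sample values in the `[voltage, timestamp]` row format described above (the values and row count are made up for illustration):

```python
# Hypothetical sample mirroring what load_data reportedly returns:
# rows of [voltage, time_in_seconds] for one channel.
eeg_data = [[70.0, 0.0000], [72.5, 0.0025], [68.1, 0.0050]]

# Basic structure checks before any frequency analysis, as suggested above.
assert len(eeg_data) > 0, "load_data returned no rows"
assert all(len(row) == 2 for row in eeg_data), "expected [voltage, time] rows"

# Extract only the voltage column (index 0), matching the line discussed:
eeg_voltages = [row[0] for row in eeg_data]
print(eeg_voltages[:5])  # inspect the first few values while debugging
```

Checks like these catch the structure mismatch early, instead of letting it surface later as a cryptic "tuple index out of range" inside the FFT code.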
"If it's designed to work with tuples or different data structures, it needs to be adjusted. This will help you verify that eeg_voltages is correctly formatted before it's used in the frequency analysis. Make sure your error handling in the convert_eeg function is capturing the full traceback of the exception, which can provide more context on where and why the error is occurring; you can modify the error print statement to include traceback information. Given that the eeg_voltages sample is correctly formatted, it seems the issue lies further down in the convert_eeg function, possibly in the get_dominant_frequencies function or its subsequent use. The error message 'tuple index out of range' usually occurs when trying to access an element in a tuple or list that doesn't exist, so the problem likely lies in how the data is being accessed or processed in these functions. Ensure that this function correctly handles a list of voltage values; if it expects a different data structure, or if it's manipulating the data assuming it's a different type, like a tuple with more elements, that could cause the issue. Double-check how indices are accessed within this function; if there's any place where it's assumed that the input is a tuple, or has more elements than it actually does, that could be where the error is occurring. Add print statements inside the get_dominant_frequencies function to understand how the data is being processed and to check if and where it fails."

Right, in the EEG-to-MIDI function it calls get_dominant_frequencies, and fs is correct. So how does this work, what will the value of segment be? Let's check that quickly; some proper Python developers are probably screaming at me right now. The problem is what segment ends up being by the time it reaches the get_dominant_frequencies function. Now that it knows what the problem is, let's ask it to regenerate the get_dominant_frequencies function. "For your EEG-to-music application, we're focused on extracting the dominant frequency from each segment of EEG data. We will assume each segment is a list of voltage values, and we will use the fast Fourier transform (FFT) to analyze these values and find the dominant frequency. This function will then return a list of dominant frequencies, one for each segment. It analyzes the EEG data to find the dominant frequency in each segment; eeg_data: list, a list of EEG data segments, where each segment is a list of voltage values." What we currently have doesn't work, so let's get rid of it; we already have numpy as np, and then there's the segment size. "This function expects each segment of eeg_data to be a list of voltage values; if your EEG data is structured differently, you might need to adjust the function accordingly." Well, you know how the EEG data is structured, so can you explain this `for segment in eeg_data`, the segments in the data? "The eeg_data is expected to be a list where each element is a segment of the EEG data. A segment in this context represents a series of EEG voltage readings over a specific time interval; for example, if your EEG data is sampled at 400 Hz and you're analyzing one-second segments, each segment would contain 400 voltage values." Yeah, but is it one-second segments? "Looping over segments: the for loop iterates over each of these segments; in each iteration, the variable segment is assigned the current segment of the EEG data being processed. Within the loop, each segment, a list of voltage values, is processed individually. The typical processing involves applying a fast Fourier transform (FFT) to the segment to analyze its frequency content; the FFT transforms the time-domain data, voltage values over time, into frequency-domain data, showing the different frequencies present in the segment and their respective amplitudes." This is taking too long. segment is a single number, so obviously it doesn't work; I'm wondering why it
doesn't understand the context, why it doesn't work and how to solve it. Why did you suggest this code to begin with, considering it was "working"? "Based on the output you've provided, it seems the error might be arising from a misunderstanding of the structure of the data being processed in the get_dominant_frequencies function." Yeah, that's right. "The output, segment: 70.0, suggests that each segment being processed in the function is actually a single voltage value rather than an array or list of voltage values, which is what the FFT function would require. This misunderstanding can arise if the eeg_data being passed to get_dominant_frequencies is a flat list of voltage values, whereas the function expects each segment to be a list of voltage values over a time interval. Ensure that get_dominant_frequencies is designed to handle the structure of eeg_data correctly; if eeg_data is just a flat list of voltage readings, you need to modify how the data is segmented before applying the FFT." Right. "If the EEG data is not already divided into segments, you need to divide it into appropriate segments before applying the FFT; for instance, if eeg_data is a list of 100 voltage readings and you want to analyze it in segments of 10 readings each, you'll need to create 10 segments, each containing 10 readings." Okay, let's do this. "Here's an example of how you might modify the code to handle this." This makes sense: the segment size is there, so we're dividing each window into 10 segments, and eeg_data is whatever is loaded on the current screen; we'll check that in a sec. It's just turning it into a NumPy array first, isn't it; yeah, converting to a NumPy array, we'll keep that comment there, then looping through each segment, okay, that's clear. So eeg_data is divided into smaller segments, each of size segment_size, and in theory we should have 10 notes coming out
of this, if it works. It's still out of range; the segment is still 70. Right, let's have it write the whole function, since it's not able to integrate the change itself, just with the one follow-up. Makes sense; it's kind of easy to tell when someone else, or rather the bot, has written this. Yeah, I might continue next time. See you in a bit.
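For reference, here is a sketch of where the corrected pipeline was heading: segment the flat voltage list, take the dominant FFT frequency per segment, and map each frequency to MIDI with the 69 + 12·log2(f/440) formula clamped to the piano range (21 to 108), as discussed during the session. The segment size of 10 and the 400 Hz rate follow the defaults mentioned above; the function names match the session, but the bodies are my reconstruction, not the stream's final code.

```python
import numpy as np

def frequency_to_midi_number(freq):
    """Map a frequency in Hz to a MIDI note using the standard pitch
    formula (A4 = MIDI 69 = 440 Hz, one step per semitone), clamped to
    a standard piano's 21-108 range."""
    if freq <= 0:
        return 21                      # guard: log2 is undefined at <= 0
    midi = int(round(69 + 12 * np.log2(freq / 440.0)))
    return max(21, min(108, midi))

def get_dominant_frequencies(eeg_voltages, sampling_rate=400.0, segment_size=10):
    """Split a flat list of voltages into fixed-size segments and return
    the dominant FFT frequency of each. This is the fix discussed above:
    segment first, then apply the FFT per segment."""
    voltages = np.asarray(eeg_voltages, dtype=float)
    dominant = []
    for i in range(len(voltages) // segment_size):
        seg = voltages[i * segment_size:(i + 1) * segment_size]
        seg = seg - seg.mean()                      # remove DC offset
        spectrum = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(len(seg), d=1.0 / sampling_rate)
        dominant.append(freqs[np.argmax(spectrum)])
    return dominant

def convert_eeg_to_midi(eeg_voltages, sampling_rate=400.0, segment_size=10):
    """100 voltage samples with segment_size=10 yield 10 MIDI notes."""
    return [frequency_to_midi_number(f)
            for f in get_dominant_frequencies(eeg_voltages, sampling_rate,
                                              segment_size)]
```

One caveat raised in the session applies here: with only 10 samples per segment at 400 Hz, the FFT bins are 40 Hz apart, far too coarse for the 0.5 to 70 Hz EEG bands, so a larger segment size (or a different estimator) would be needed for a musically meaningful mapping.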