Still struggling with this — we're still doing the EEG-to-music conversion. We're trying to use both GitHub Copilot and GPT-4 for this project. We're checking the data, so this is what we have so far. We actually started from scratch with this EEG-to-music application, because the previous attempt just didn't work. This web app is something we already had in the past; now we're trying to add the music feature on top of it, so we are "starting from scratch" on top of an existing application. Yes, I'm still testing the sound and everything. I have to keep talking, because there's about a five- or six-second delay between what comes out of the stream. The kit does work, and I don't need to listen to myself anymore. I don't want to mess around with the cameras or the USB ports; the sound should just be there. So we are trying to map this EEG to music. We have this recording — that's a seizure, isn't it? That's a seizure, that's after the seizure, seizure onset — and those markers are available on the slider. You can go to nikeos.com to check out previous versions of this tool, but in the link in the chat we're trying to add a feature which will play music when you scroll through this file. The current conversion does not work. So we have a bunch of functions: extractEEGDataForMusic should be extracting the different frequency bands. This function would not work because we don't have spectra in the data. Let's check what we are getting here. No, these are `&&`, right? Let's have Copilot explain the whole line — it needs more context. Yeah, those are `&&` (AND) operators; surprisingly, those should be working. Okay, now let's check if this function is actually returning anything. I think it does. So we're logging to the console — but then it's an object. Let's see the object's separate components. How do we correct this?
Does GPT-4 understand the context — that in this case we have an object and the console is printing its default string representation? Yes, it does understand the context. Let's do this. The first one is okay; it actually shows the data. Okay, we have the data, so that one is working. Now notesToPlay — same thing: a comma there, and we have undefined. notesToPlay is undefined. Let's try both GitHub Copilot and GPT-4 with the same prompt; surprisingly, they give different responses, so the outputs are not the same. Copilot: "The output you see indicates that notesToPlay contains four elements, all of which are undefined. Here are a few possible reasons..." — and it mentions the eegDataForMusic object and the other function. GPT-4: "Yes, the output you're seeing in the console — notesToPlay: undefined, undefined, undefined, undefined — indicates that the array notesToPlay has four elements and that each of them is undefined. This suggests that the function responsible for populating notesToPlay is not correctly mapping the EEG data to musical notes. Let's break down the potential issues and solutions. One: check the extractEEGDataForMusic function. Ensure that extractEEGDataForMusic is extracting the correct data from the current EEG data. Verify that the data it returns is not undefined. Log and inspect the output of extractEEGDataForMusic to confirm that it is returning an object with valid values for delta, theta, alpha, and beta. Two: validate the mapEEGDataToNotes function. The mapEEGDataToNotes function may not be handling the extracted data correctly. Review this function to make sure it correctly interprets and maps the EEG data to musical notes. Check that normalizePower and mapPowerToNote, within mapEEGDataToNotes, are implemented correctly and return valid note values, not undefined."
"Three: inspect the data passed to mapEEGDataToNotes. Make sure the data passed from extractEEGDataForMusic to mapEEGDataToNotes is in the expected format and contains the necessary information for note mapping. Four: add debugging logs. Add console.log statements in mapEEGDataToNotes to log the values of normalizedDeltaPower, normalizedThetaPower, normalizedAlphaPower, normalizedBetaPower, and the final note values. This can help identify where the undefined values are originating. Five: review data conditions. If the EEG data does not fall within certain ranges or conditions required for mapping to musical notes, the mapEEGDataToNotes function might return undefined. Ensure that the EEG data is suitable for generating musical notes. Six: error handling. Add error handling in mapEEGDataToNotes to catch and handle any unexpected scenarios or data anomalies. By thoroughly reviewing and debugging these parts of your code, you should be able to identify where the issue lies." After pasting the object: "The eegDataForMusic object seems to be correctly populated with arrays for each EEG frequency band — delta, theta, alpha, beta — each containing a series of values. Given this data structure, if notesToPlay is still ending up with undefined elements, the issue likely lies within the functions that process this data to generate musical notes. Here are the areas to focus on. This function is crucial, as it maps EEG data to musical notes. Ensure that it correctly interprets the EEG data and maps it to valid musical notes. Given that eegDataForMusic contains arrays of values for each band, the function needs to correctly process these arrays. It might involve selecting specific values from these arrays or aggregating them in some way before mapping to notes. Review the logic inside mapEEGDataToNotes, especially how it handles the arrays of EEG data. If the logic expects single values but receives arrays, it might return undefined."
"Check the implementation of normalizePower and mapPowerToNote. These functions should be capable of handling the array data from eegDataForMusic and returning meaningful musical notes. Add console.log statements within mapEEGDataToNotes to log the input EEG data and the output notes. This will help you track down where the undefined values are being introduced. It might be helpful to log the intermediate steps in the note-mapping process, such as the results of normalization and the specific note values being generated." I think those values are all NaNs. So where's this normalizePower? "Normalize the power value to the zero-to-one range — this is just example logic." Okay, this is obviously wrong. Can we regenerate this function? Generate this function, please: the normalizePower function should take a single power value, with a specific range between zero and one — just standard normalization. Yeah, I have to update that function. This one obviously looks much better. It's still getting NaNs. Let's log this quickly: console.log(power), console.log(normPower). Why is this greater... it should have normPower here. Yep, and return normPower. Let's check this one. Is this function not being executed? Why is it... wait a sec. In JavaScript, the order of the functions is important. Why is there no error message? There is no error message — why? Clearly, in the console... is that function not being executed? Let's read this quickly: "...to a range of zero to one. This is done by subtracting the minimum value and then dividing by the range (max minus min). Adjustments: you may need to adjust minPower and maxPower based on the actual range of EEG data you're working with to ensure meaningful normalization. This function should work with the aggregated EEG data values, like averages, and normalize them before they are mapped to musical notes." Add console.log(power)... do I need to restart the app or something? It's still not working.
Start it again — it's a Flask application, start it from here. Reload the page. So it's NaN to begin with. That's not good. It's actually the spectral thing. Let me check this quickly with Copilot: what objects are in the data? Check the power key and normalizePower against the EEG data. Let's paste the objects from the EEG data and do that quickly — need to get that right. Ctrl+F5 to refresh the page. Adjust the names of the objects. This looks correct, but here power is NaN. If I ask Copilot about the whole thing... let's do this quickly. "The variable power is NaN — not a number. This usually happens when a math operation is performed but doesn't result in a meaningful number." But there was no math operation performed on this power input. Okay, this is wrong. It should give me just one value. Yeah, this should be one value; this just gives me... anything with power. What we're trying to do is normalize the power. minPower is zero. Okay, now I understand: the normalization doesn't work, but the power input is working. Okay, eegDataForMusic — that works. Ctrl+F5 again. So we have the power input; it's that equation. So the input is okay: it's roughly between 20 and 400. Then we normalize it to between zero and one, clamping the power so it doesn't exceed the expected range. But what if it does? Then... zero and 400. I think there's something wrong with this function. "Logging the power input: console.log(power) will show the incoming power value, which is useful for diagnosing any issues with the data before normalization. Defining the power range: you've set minPower to 0 and maxPower to 400. These values should ideally encompass the range of EEG power values you expect from your data. Clamping power values: the line power = Math.max(minPower, Math.min(maxPower, power)) ensures that the power value is within the defined range. This is good practice for handling any outliers or abnormal values."
"Normalizing the power value: the formula (power - minPower) / (maxPower - minPower) normalizes the power value to the zero-to-one range. Logging intermediate values: you're logging both the clamped power value and the normalized power value, which will be helpful for verifying that each step works as expected. Returning the normalized power: finally, the function returns the normalized power value." And then, after pasting the data: "The issue here is that the power input to your normalizePower function is an array, not a single numeric value. The function as written expects a single number but is receiving an array of values instead. This mismatch is why normPower results in NaN. To fix this, you need to aggregate the array of power values into a single representative value before normalizing. This could be done by calculating the average, the peak, or another method that suits your application's needs. Here's how you can modify the normalizePower function to handle an array of values. This modified function first checks whether the input is a valid array and then calculates the average of the array's values. It then proceeds with clamping and normalizing this aggregated value. This approach should resolve the NaN issue and provide a meaningful normalized value for further processing." Yeah, we have notesToPlay — it's working. Now we're having an error with the audio, the AudioContext. Let's try GitHub Copilot again — it's not very useful, or I just don't know how to use it. Let's restart this page. Yeah, it should have played from the start if it was initializing the AudioContext. "The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture." So that error went away once we interacted with the page; that should be fine. We have the notesToPlay, but we're getting the error at line 122, "error fetching EEG data". Can you help us here? It should be a simple fix — not for me, for the bot. It's generating very slowly; we'll probably hit the limit soon. There was that 50-prompts-per-hour limit, I think.
"The error message you're encountering — 'Failed to set the value property on AudioParam: the provided float value is non-finite' — indicates that an invalid value, likely NaN or Infinity, is being passed to an AudioParam property in the playFrequency function. This typically happens when setting the frequency or volume of an oscillator or gain node in the Web Audio API. Validate the inputs to the playFrequency function: ensure that the frequency and volume values passed to playFrequency are finite numbers. Debugging: add console.log statements in the playFrequency function to log the frequency and volume values before they are used to set the properties of the audio context objects." It's something with the volume. For the playFrequency function it suggests some console logs for debugging. "This addition will help you identify whether invalid values are being passed to the function. If you find that the values are indeed non-finite or NaN, you need to trace back to where these values are being calculated or assigned and ensure they're computed correctly. This might involve revisiting the functions that map EEG data to musical notes and frequencies." This again — invalid frequency, what? So the four notes should have been translated into frequencies and played as a chord. "The errors 'invalid frequency: undefined' in the log, and 'note: undefined, volume: undefined, duration: undefined', indicate that the playFrequency function is being called with undefined values for frequency and volume. This typically means that the process used to map EEG data to musical notes is not working as intended, resulting in undefined values being passed to the playFrequency function. Verify the note-mapping process: ensure that the function mapEEGDataToNotes, and any related functions like normalizePower and mapPowerToNote, are returning valid note information, including frequency and volume."
"It's possible that these functions are not correctly interpreting the EEG data, or the data itself isn't suitable for the current mapping logic." The duration is okay, and we have the notes — the musical notes — but the volume and playing frequency are undefined. So let's go through this quickly. Well, while it's generating, we have a question I can't see properly — it's about EEG classification feature extraction. Well, we do do that in another tool. I mean, this is it as well, because we're extracting features from EEG so we can turn it into music in this case. So it depends — features will be specific to the application. This particular data has seizures in it, so your feature extraction should be informed by what the data actually contains, obviously. And if you're doing specific EEG recordings — or, what are they called, ERPs, evoked response potentials, when you shine lights or play sounds — then the features that you extract will depend on the experimental setup. That's kind of obvious, and I'm sure GPT-4 would explain it in a much better fashion. But here we're also doing feature extraction: we're extracting features in terms of power in each frequency band. By the way, we actually don't need the whole spectrum. We're extracting those four band powers and then turning them into musical notes somehow — that's the mapping mechanism. Currently we do get four notes, one for each frequency band, based on some sort of basic linear conversion. By the way, most of the processing is done in JavaScript on the front end, so that my server is not overloaded — so you use your own electricity. Yes, bodykills.com is free at the moment, but that means in this case we're trying to do most of the processing on your end, the user end. Let me quickly format the document. Yeah, it's already getting ridiculously long, and I'm not sure whether GPT-4 can hold that much context, but let's give it a go.
What did it say? "The issue you're encountering is that while notesToPlay correctly contains musical note names — C3, F3, B3, E4 — the playFrequency function is expecting numerical values for frequency and volume. This discrepancy is why you're seeing 'playing frequency: undefined, volume: undefined'. To resolve this, you need to convert these note names into corresponding frequency values that can be used by your audio functions. This conversion typically requires a mapping from note names to frequencies. Here's an approach you can use." Pretty sure we're already doing this. "Create a mapping function: implement a function to convert note names to frequencies. There's a standard way of doing this based on the note's position on the chromatic scale. Update the calling code: use this function to convert the note names in notesToPlay to frequencies before calling playFrequency." Yeah, I'm pretty sure we're already doing this. I have to check the code again — let me search for "frequency" quickly. Right, so playFrequency: playFrequency(frequency, volume, duration). Why do I need to map it myself? Surely there's just a function that can do it. Looking into it in a sec. Let me ask another one as well: "Why do we need to map notes to frequencies manually? Isn't there a function to do that?" Let's see. While it's generating — Maryland says... it seems to have quite a few... might have asked about hand movements. Okay, hand movements. Hello. Yeah, so I actually have gear here to record EEG. It's not quite off the shelf — it's bits and pieces of equipment — but in theory I could do it. I have one of those National Instruments DAQ digital-to-analog converters, or in this case used the other way around, as an analog-to-digital converter. But it needs a preamplifier. Ideally we would use an EEG preamplifier, but they are quite expensive, so you could just use an audio preamp — it should cost like 10 bucks or so.
So I should look at getting one of those. Or if you have some lying around, please do let me know — I just don't want to pile up gear that I would only use once. But yes, if you did record EEG — or if you know of a data set in the public domain that I could look at — please do let me know as well. That would probably be better, because it would be pre-recorded data, so I wouldn't have to do it myself. There are probably data sets of this sort available. I do need it in the public domain, however, so I can use it on the site, on YouTube, etc. So yes, if you had your EEG recording and you were moving your hands in a certain way — this has to be done in a specific way, because the main thing you will probably have is motion artifact from moving your hands. You just have to be careful that whatever results you come up with are not contaminated by motion artifacts, because you could say, oh yeah, I can see EEG correlating with hand motion — but it's not actually coming from the brain; it's coming from the lead wires slightly moving, and that's the feature you extracted. That would be a false result. So even though it's a simple experiment, you should control for artifacts; otherwise it would be an absolute nightmare. I do actually have a piece of equipment that doesn't have any lead wires. Ideally, your recording device — your logger — should be on the head as well, so there are no leads coming through; then when you move your hands, you're not picking anything up from the leads. And if you fix the head, so it's obviously not moving and you only move the arms, then it should be really straightforward. GPT-4 could write the code for you in like five minutes to process that data, and no, for something simple like that you do not need neural networks. Once you hear "neural nets", "machine learning", whatever — it usually means something isn't working.
Beware — red flag — things should be simple. I mean, whatever the neural net does, you will eventually have to explain the mechanism behind it — how it is actually working. And most AI tools, you can't explain: like, why does an LLM with that many tokens, whatever the size, work — and if you cut its training data to 80%, it doesn't work anymore? Well, you want to know why that happens. So in this case, GPT-4 is suggesting yet another function. Every time you ask it something, it suggests another function. That's why the code is getting so long — it's about 200, almost 300 lines of code at the moment, and no, it should not be that long. "Add an event listener... control... play... the input." Yeah, we don't have a play option on the site. We used to have a big play button that would scroll through the file for you. We don't have it anymore because it was overloading my server. So if you want a custom tool, please contact me directly; otherwise, the button is gone — because every time someone hits play, it keeps playing, they forget about it, and it overloads my server, and you probably don't actually need it. I want you to actively engage; therefore you'll have to use the mouse and do some work. So I asked: why do we need another function? It says, "Yes, you're correct..." Yeah, we do like functions like that — they do basic mathematical equations. And yeah, I've seen this before; I would assume it is correct. We'll know in a second. That's what I was saying about any other experiments as well: when people complain online that GPT-4 is generating false results — well, in this case we can actually test everything it does, so we'll know in a second whether it's working or not.
So it sounds like we do need an extra function, and we could use some sort of API — a JavaScript library — but we want to do as much as possible ourselves. "This function calculates the frequency for a given note in the chromatic scale based on A4 = 440 Hz. However, using a library like Tone.js is recommended for more complex musical applications; it greatly simplifies the process and reduces the potential for errors." Okay, yeah — obviously every line in here is a potential source of error. I don't like this bit: is it reducing the octave by one? If anyone out there knows how music works, I could really use some advice. But for now, we'll just rely on GPT and keep piling up functions. But I don't think I understand this. Oh, it is an input: if note is an input of the noteToFrequency function, why is a constant called notes required? "It seems like there might be a bit of confusion regarding the function and its usage. When you have a function like noteToFrequency, its purpose is to convert a musical note, like C4, A3, etc., into its corresponding frequency in hertz (Hz). The function takes a single note as input and returns its frequency. Here's a simplified version of such a function. note is the input parameter — you pass a note name like C4 to this function. noteFrequencies is an internal object used by the function; it maps note names to their corresponding frequencies." We will go with this function — it's still not being used. "You don't need to provide this noteFrequencies object when you call noteToFrequency; it's part of the function's internal implementation. The constant noteFrequencies is used by the function to look up the frequency for the given note." It might have misunderstood the question. The question is: over here, if note is an input, why doesn't it just use that equation to compute its frequency?
"You're absolutely right to question the need for a constant array of note names in the noteToFrequency function, especially considering the note name is already an input to the function. Let me clarify. The array of note names in the noteToFrequency function serves as a reference for determining the position of a given note within an octave. This position is necessary to calculate the frequency of the note based on its distance from a fixed reference point, commonly A4 (440 Hz). Here's a breakdown of how the function uses this array. Identify the note position: the function finds the position of the input note in the array. This position indicates how many semitones away the note is from A, which is typically the starting point of the array. Calculate the frequency: the function then uses this position to calculate the frequency. The formula it uses is based on the fact that each semitone step represents a frequency ratio of the 12th root of 2, approximately 1.05946. This is due to the equal-temperament tuning system, where an octave is divided into 12 equal parts (semitones). In the function, the line const keyNumber = notes.indexOf(note.slice(0, -1)) is used to find the position (index) of the note in the octave. This is combined with the octave number to calculate the distance in semitones from A4, and then the frequency is calculated accordingly. In essence, the array is simply a list of notes — a map for translating note names into their positions in an octave, which is a crucial step in calculating their frequencies. If you already have a method for determining a note's position (number of semitones away from A4) without using such an array, then you wouldn't need it. However, in most implementations, this array serves as an efficient lookup table for that purpose." How come GitHub Copilot generates text so much faster? I suspect it's actually not GPT-4 — it's 3.5.
"The noteToFrequency function is used to convert musical note names, like C4, A3, F#5, to their corresponding frequency values in hertz (Hz). This is especially useful in contexts where you're working with musical notes and need to interact with audio APIs or perform audio processing that requires frequency values. Here are some common scenarios where you might use this function..." Why are you giving me generic responses? Can you give a specific response based on the code uploaded earlier? Yep, GitHub Copilot gives a generic response as well. "Certainly, let's apply the noteToFrequency function specifically to the context of your earlier code. Based on the information you've provided, it seems you have a series of musical note names, like C3, F3, B3, E4, that you want to convert into frequencies to play them using the Web Audio API. In your existing setup, you likely have a segment where you're iterating over an array notesToPlay, which contains these note names. Here's how you can integrate the noteToFrequency function into that workflow." Yeah, I'll have real trouble integrating the whole thing. "Integrate noteToFrequency: first, ensure that the noteToFrequency function is defined in your script. This function will be responsible for converting note names to frequency values. Modify the loop where you play notes: in the part of your code where you iterate over the notesToPlay array and call a function to play these notes, you need to use noteToFrequency to convert each note name into a frequency." We have noteToFrequency — oh yeah, that's the current one. We have the playFrequency function, yes. We have notesToPlay, yes — this is logging to the console. notesToPlay... is this correct? I don't know. This example usage probably won't work. It is an error — the function is not complete. "Here, notesToPlay is the array containing your note names."
"For each note, you first convert it to a frequency using noteToFrequency(noteName), and then use this frequency to play the note. Please replace *defaultVolume* and *defaultDuration* with appropriate values or variables from your code. If your playFrequency function doesn't use volume or duration, you can omit these arguments. This integration ensures that each musical note name in your notesToPlay array is converted to the corresponding frequency before being played." Come on, asterisks... yeah, can I have those? Okay, sorry for that — that was loud. That was really loud, but it works! So that's also a wicked combination. Let's reduce the volume. Make the duration 0.5... 0.1, okay. So now, back to the question of whether we would need a neural network to actually make this sound better — the answer is, I don't know. But we're still getting an error. Is that another valid note name? No — "error fetching EEG data"... let's check this stuff. Let's try to play these four notes; I think some of them might not be working. "The error message 'TypeError: assignment to constant variable' suggests that there's an attempt to reassign a value to a constant variable within your noteToFrequency function, or in the segment of code that calls it. In JavaScript, constants declared with const cannot be reassigned after their initial assignment. Based on the information provided, the issue seems to be occurring at eeg.js:318:20, which is within your noteToFrequency function, and could be related to how this function is being used in the code around eeg.js:120:39. Review the noteToFrequency function: go to the line indicated in the error message, line 318 in eeg.js, and check whether there's an attempt to reassign a value to a constant. If a variable needs to be reassigned, consider using let instead of const. Check the context of the function call: look at the lines around eeg.js:120:39 where noteToFrequency is being called."
"Ensure that you're not trying to reassign a value to a constant variable in this part of the code." It's way too loud, isn't it? It's loud for me, so it must be loud for you as well. Where's the play... I don't think this one's actually working. Yeah, because we're not playing notes, we're not playing frequencies... let me search for volume. That's like half the volume. So let's check: when there are no seizures, it's playing C3. These notes — F3, B3, E4 — it's just always playing the same notes. It should turn off the whole spectrum by default. C3. Yes, let's play this: all frequencies combined. It plays the same frequencies. Listen to it a little — it's always playing the same frequencies. Okay, we'll just continue next time.