So we're trying to map this EEG into music. We have this recording; that's a seizure, isn't it? That's a seizure, that's after the seizure, that's the seizure onset. Those work, and they're available on the slide; you can go to keo.com to check out previous versions of this tool. Now we're trying to add a feature that will play music as you scroll through this file. The current conversion does not work.

So we have a bunch of functions extracting EEG data for music: we're extracting the different frequency bands. Yet this function won't work, because we don't have spectra in the data. Let's check what we're getting here. No, those should be working okay. Now let's check whether these functions actually return anything; I think they don't. So we log to the console, and it's an object with the separate components. How do we fix this? Line 110, wasn't it? Okay, we have the data, so that part is working. Then notesToPlay. Okay, now notesToPlay is undefined; all the notes are undefined. Let's try both.

What we're trying to do is normalize the power, but the power is zero. Okay, now I understand: the normalization doesn't work. The power input is fine, and the EEG-data-for-music extraction works. So we have the power input, and it's just that equation. The input is okay: the power is roughly between 20 and 400, and we normalize it to between 0 and 1, clamping it just above so it doesn't exceed the expected range. But what if values do go above 400? Is there something wrong with this function? No, notesToPlay is working now.

We're getting an error with the audio context, but that only starts once we interact with the page, so that should be fine. notesToPlay works, and then we hit an error in the script at line 122: "Error fetching EEG data". What? Let's get some help here. Should be a simple fix; not for me, for the bot. There's something with the volume in the playFrequency function. Volume and frequency, what? So the four notes should have been translated into frequencies and played as a chord.
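The normalization step described above (band power roughly between 20 and 400, mapped into 0 to 1 and clamped so it can't exceed the expected range) can be sketched like this. The function name and the range constants are assumptions for illustration, not the tool's actual code:

```javascript
// Hypothetical sketch of the band-power normalization discussed above.
// MIN_POWER and MAX_POWER are assumed bounds (~20 and ~400 in the session).
const MIN_POWER = 20;
const MAX_POWER = 400;

function normalizePower(power) {
  // Linearly map the raw band power into [0, 1].
  const normalized = (power - MIN_POWER) / (MAX_POWER - MIN_POWER);
  // Clamp so out-of-range inputs (e.g. power > 400) stay within [0, 1].
  return Math.min(1, Math.max(0, normalized));
}
```

Without the clamp, a power of 1000 would map to about 2.6 and push the downstream note mapping out of range, which is exactly the failure mode being debugged here.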
The duration is okay. We have the notes, the musical notes, but the volume and the playing frequency are undefined.

We're extracting features from EEG so we can turn it into music. The features will be specific to the application: this particular data has seizures in it, so your feature extraction should be informed by what the data actually contains. Here we're doing feature extraction in terms of the power in each frequency band; by the way, we don't actually need the whole spectrum. We extract those four band powers and then turn them into musical notes somehow; that's the mapping mechanism. Currently we get four notes, one per frequency band, based on some sort of basic linear conversion into a chord.

By the way, most of this is done in the JavaScript front end, so that my server isn't overloaded; you use your own electricity. This bar chaos.com is free at the moment, but that means we're trying to do most of the processing on your end, on the user end.

Let's just search for "frequency" quickly. Right, playFrequency. Why do I need to map it myself? Surely there's just a function that can do it. I'll look into it in a second. Functions like that are basic mathematical equations, and I've seen this one before, so I would assume it's correct; we'll know in a second. That's what I was saying about the other experiments as well: people complain online that GPT-4 or whatever generates false results, but in this case we can actually test everything it does. We'll know in a second whether it works or not. It sounds like we do need an extra function. We could use some API from a JavaScript library, but we want to do as much as possible ourselves. And obviously, every line in here is a potential source of error.
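The note-to-frequency mapping discussed here really is just a basic equation: in equal temperament, f = 440 · 2^((m − 69)/12), where m is the MIDI note number. A self-contained sketch (the parsing helper and constant names are mine, not the tool's):

```javascript
// Hypothetical noteToFrequency using the standard equal-temperament formula.
// Semitone offset of each pitch class within an octave.
const NOTE_OFFSETS = { C: 0, 'C#': 1, D: 2, 'D#': 3, E: 4, F: 5,
                       'F#': 6, G: 7, 'G#': 8, A: 9, 'A#': 10, B: 11 };

function noteToFrequency(note) {
  // Split a name like "C3" or "A#4" into pitch class and octave.
  const match = /^([A-G]#?)(\d)$/.exec(note);
  if (!match) throw new Error(`Invalid note name: ${note}`);
  // MIDI convention: C-1 is note 0, so octave n starts at (n + 1) * 12.
  const midi = NOTE_OFFSETS[match[1]] + (parseInt(match[2], 10) + 1) * 12;
  return 440 * Math.pow(2, (midi - 69) / 12); // A4 (midi 69) = 440 Hz
}
```

Because the note name is the only input, no extra notes constant should be needed, which matches the question raised below about why the function requires one.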
Unlike this one, which shifts the octave down by one. If anyone out there knows how music works, we could really use some advice; for now, we just rely on GPT and AI and keep piling up functions. Now, the other thing I don't understand: if note is an input to the noteToFrequency function, why is a constant called NOTES required? Even if we go with this function, it's still not being used. If note is an input, we should just use that equation. We have noteToFrequency; yes, that's the current one. We have the playFrequency function, yes. We have notesToPlay, yes, and it's logging notesToPlay to the console. Is this correct? I don't know. This example usage probably won't work; it's an error. It's common, this error in the example usage. Okay, sorry for that. That was loud. That was really loud, but at least it works. That's also a wicked combination. Let's reduce the volume and make the duration 0.5... no, 0.1.

Okay, so now back to the question of whether we would need a neural network to actually make this sound better. The answer is: I don't know. Still getting an error, at line 130: invalid note name, is it? No: "Error fetching EEG data", probably a problem with the data. Yeah, we'll have to check all of this. Let's try to play these four notes; I think some of them might not be working. I don't think this one is actually working. Right, because we're not playing notes, we're playing frequencies. Search for "volume"; halve the volume. So let's check when there's no seizure: it's playing C3. The notes are F3, B3, E4. It's just always playing the same notes. Yeah, we should turn off the whole spectrum by default. C3, yes; let's play these four frequencies combined. Yeah, it seems to always play the same frequencies. I'll copy this into the log text: it's always playing the same frequencies. Okay, we'll just continue next time. Bye.
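Playing the four band-derived frequencies together as a chord, at a controlled volume and duration, can be sketched with the Web Audio API roughly as below. This is a minimal sketch under my own assumptions (function names, default volume 0.5 and duration 0.5 s from the session); note that browsers only allow an AudioContext to start after a user gesture, which is why interacting with the page matters:

```javascript
// Split the overall volume across oscillators so a four-note chord
// is not four times louder than a single note (the "too loud" problem).
function chordGain(volume, noteCount) {
  return volume / Math.max(1, noteCount);
}

// Hypothetical chord player: one oscillator + gain node per frequency.
function playChord(audioCtx, frequencies, volume = 0.5, duration = 0.5) {
  const gainPerNote = chordGain(volume, frequencies.length);
  for (const freq of frequencies) {
    const osc = audioCtx.createOscillator();
    const gain = audioCtx.createGain();
    osc.frequency.value = freq;   // Hz, e.g. from noteToFrequency
    gain.gain.value = gainPerNote;
    osc.connect(gain).connect(audioCtx.destination);
    osc.start();
    osc.stop(audioCtx.currentTime + duration);
  }
}
```

A usage sketch in the browser would be something like `playChord(new AudioContext(), [174.61, 246.94, 329.63], 0.5, 0.1)` for F3, B3, E4, called from a click or scroll handler so the context is allowed to start.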