Let me test the audio quickly and see if everything works correctly. Can't hear myself. Do I have the audio? Okay — can't hear myself. Well, good and not good: it's good because the stream is working, and it's not good because I don't like my own voice, so I normally don't monitor the sound continuously. I'll get back to the usual chaos in a sec and just cover a few random things that might be of interest.

So yeah, thank you to all the people who subscribed; I have quite a few subscribers at the moment. There are quite a lot of videos now, including the live streams. In case anyone is interested in my YouTube analytics, the only thing I actually look at is watch time in hours. I've reached 660 hours, and apparently there's another tool that can show you that — it's in the public domain. I've uploaded about 286 hours' worth of videos, so the ratio is about 2.3 watch hours per uploaded hour. That's my new KPI: 2.3. Hopefully I don't go below it, or at least not significantly below it. I'll trace that 2.3 number over time and see how noisy it is. Let me close that; don't need this either.

We'll start developing in a sec — we'll still be doing the EEG-to-music transformation — but I'll just touch briefly on a couple of things. Supposedly GitHub Copilot has this new feature; they've updated it and it's supposed to be much better now. There was a video on their official YouTube channel, and it mentions this @workspace thing. It's actually disappeared for me now, which is interesting. They had @vscode there as before, and this @workspace as well, but I kept getting this "ArrayBuffer allocation failed" error. I was complaining about this in the past, so on one hand the Microsoft people, the GitHub people, the Copilot people did listen and addressed it, but for some reason it doesn't work for me — it didn't work in the video for the person presenting either. Essentially the idea is that with this @workspace command — which has disappeared for me, which is quite interesting, and it happened live — you're meant to be able to access your entire project, all the files, which was not possible before. Yes, this one does work if I go to a specific file.

I might as well go back to the EEG-music app. So say we have this file; run it. The music is not coming through — the sound is on default. Let's try this again. Right, so this does work. That's the application we've been working on for a while. It's meant to be playing all these notes that we currently cannot hear, because in the browser we also get this "buffer not found" error — which is, surprisingly, probably not related, but it's similar to the GitHub Copilot message I'm getting, the "ArrayBuffer allocation failed". Actually, I have to make this window small, just so we don't load that many notes — only about 10 seconds at a time. Sorry, I had to sneeze; luckily I was able to mute the microphone.

Someone is asking about the sign language recognition project. Yeah, I did try it in the past and it didn't work so well. There's actually a Kaggle competition that you should go and look up — I mentioned it in a previous stream as well. Maybe you've already looked at it; since you're asking, you'll probably do a better job than I did. Can I ask GPT-4? Yes — I don't know if you have access to GPT-4 yourself, maybe not, that's fine. Can you give us a summary of sign language recognition and what has already been done on that front, including the Kaggle competition?
Maybe include the results from it. Was there any revolutionary breakthrough in the way it works? Yep, let's see. It's going for a Bing search — the ASL Kaggle competition, blog posts at NVIDIA. Yeah, NVIDIA — there was actually someone who won that competition who works for NVIDIA; that's probably why it's looking at that blog.

Sign language recognition technology has experienced significant advancements over time. The journey began in the 1960s with Signing Exact English, a system designed to help non-signers interpret sign language into written English. However, this early system was limited due to its reliance on a single, predetermined set of hand movements. In the 1980s, researchers began developing computer-based sign language recognition systems. These early systems used cameras to record hand motions and pattern recognition algorithms to examine them. However, they had limited accuracy and vocabulary, making them inadequate for every— I'll stop that. Stop this as well. In ChatGPT I had given it custom instructions — respond as short and succinctly as possible, with an exclamation mark there — and it seems to be ignoring that.

Yeah, that name actually sounds familiar: Christoph Henkel. Let me just read that bit quickly. Regarding the Kaggle competition for sign language recognition, Christoph Henkel from NVIDIA won the top spot in a challenge focused on detecting American Sign Language fingerspelled characters and translating them into text. This competition was unique due to its novelty and a relative lack of research in this area compared to other fields like computer vision or speech recognition. Henkel's winning solution used an end-to-end model adapted from speech recognition AI, augmented by a training dataset of over 3 million fingerspelled characters. This dataset was captured via smartphone videos and converted into coordinates representing the signer's face, hands, and pose. This approach accounted for different styles, dialects, and speeds of fingerspelling, making it a versatile solution for various applications, such as web...

Yeah, from memory they were doing signs, but the problem is that they were actually limiting it to a specific frame, so you only have the hands alone. And then there are problems because they only look at one hand at a time: apparently there's a dominant hand and a secondary hand, and sometimes sign language speakers swap hands — they use primarily one hand and the secondary one is aiding — or sometimes you only have one hand on screen for spelling things out. So there are all these complicated issues that make collecting the data and cleaning it up quite complicated. I gave it a go, but essentially, if you look at this person's code — it should be available on GitHub — it should give you a lot of information about how to do it.

Yeah, I'll ask it again whether it's an end-to-end model including data collection or not; I think the data was pre-collected. Let's try another prompt. Was that solution end-to-end in terms of collecting data in real time from video footage and translating sign language to text, or was the data pre-recorded? Also, is the code available open source online? Any suggestion how to turn this into a Flask web application? Yep, it's repeating itself — it's ignoring my request for succinct responses, I don't know. Then the answer: the training dataset, with over 3 million fingerspelled characters created using data augmentation, was captured via smartphone videos.
The data was then converted into coordinates representing the signer's face, hands, and pose in each video frame. The data used was pre-recorded, not real-time. As for the code's availability, it's not explicitly stated whether it's open source or not. For integrating this technology into a Flask application, one would typically use a pre-trained model to process video input, translate sign language to text, and then implement this functionality within a Flask web framework for user interaction and display.

All right — the training dataset, I'm pretty sure, is available somewhere. Yeah, we'll try giving it another go, but yes, it might be quite a lengthy project. I don't know, it kind of depends where you are at this stage — have you done anything at all? Yeah, so that's the beginning of it: you start with images like that. Then you'll kind of have to either learn sign language — obviously it will help if you actually know sign language, in whichever language — or the idea was to use those pre-recorded videos, have someone who understands sign language translate the video footage into text, and then train a model on that; that's what the competition was. So it kind of depends where you want to start. And if you're looking at another language, that pre-trained model will probably not work — it was done on ASL, American Sign Language — so you would essentially have to train from scratch, depending on how different the languages are.

And let's get back to this. I'll start a new Copilot chat as well, and then GPT-4: we have this code, and the problem with it is that I cannot hear any sound. I've been struggling with this one for the last couple of days. Do you think we'll be able to fix it in, say, half an hour or something? Probably not, but we'll give it a go anyway. At the bottom of the code I note that I cannot hear anything: the sound is not being played, although the button does work. So I have this main.js that is imported. Yeah, the problem currently is with these notes; there's a problem with them as well. Clear the console. It's also playing only two of the notes — there should be more of them. There's some sort of translation issue, some sort of syntax thing.

Let's pop this stuff in, starting from index.html, then we'll do the JavaScript, and then the Python code, and have Copilot explain it. Explain this as well. So I have GitHub Copilot on the left and GPT-4 on the right. Yeah, I did say this code is for reference only — don't generate anything — but it still did, which is okay. So the HTML is not where the problem is. That's Copilot for you; hopefully it picks up on the prompt at the end of the code. Pop it into GPT-4 as well. Yeah, it's something to do with the notes not being loaded properly. GitHub Copilot is giving generic responses, whereas GPT-4 is being more specific and actually trying to solve the problem. Still generating. Yeah, it's something with the notes, the names of those notes — essentially you have to look at the documentation. That's a good question: essentially the soundfont. So that's where the JavaScript is — it was built just recently... no, wait, that's not the one we're using. Yeah, it's this one here; there's a link in this thing — it's the JavaScript where we're loading the piano. So, to understand: why do we need to load these notes manually? Do we need to load them manually? Let's see what other notes there are — potentially we have all of those as well. Why don't we just load all the notes? Is it possible to load all the notes?
Yes, it is possible to load all the notes for an instrument using a soundfont library, but it is important to consider the trade-offs. Memory usage: loading all notes for an instrument can consume a significant amount of memory; this might not be an issue for modern computers, but it could be problematic for devices with limited resources. Loading time: the more notes you load, the longer it will take for the instrument to be ready, and users might experience a noticeable delay when the page is loading. Network bandwidth: each note loaded requires a network request to fetch the audio file, which could increase data usage and loading times, especially on slow internet connections. If these trade-offs are acceptable for your application, you can modify the loadPiano function to load the full range of piano notes; typically, a standard piano range is from A0 to C8. Here's an example of how you might modify the function to load all notes. Okay, we have to try that. In this updated function, we create an array notesToLoad containing all note names from A0 to C8, taking into account the exceptions at the extremes of the piano range. The function then attempts to load all these notes into the piano instrument. Remember to handle the increased loading time and potential memory usage in your application. (I'll drop a rough sketch of that loading loop at the end of this paragraph.)

Yep — again, it's actually playing all the test notes. Let's reduce the window size and then select a different channel, just for the heck of it. I think it should already be playing the notes, but it's still skipping stuff. Cool. This is from the console: this is the full range of notes for each octave, including both natural notes like C, D, E and sharp notes like C#, F#, A#. To clarify, here's how the loop works for these specific notes. For C#6: when the loop reaches the sixth octave (octave equals 6), it includes C# in the notes to load, creating C#6. For F#6: similarly, in the sixth octave F# is included, forming F#6. For A#5: when the loop is in the fifth octave (octave equals 5), it includes A# in the notes to load, resulting in A#5. The loop iterates over a predefined array of note names, noteNames, that includes both natural and sharp notes for each octave from 0 to 8. This ensures that all variations, including the sharp notes, are accounted for in the range of a standard piano.

Okay, so why am I still getting an error, with some of the notes being skipped? It's something to do with the syntax of this thing. Let's try these two variations: one will have the basic ones — I suspect all of these will play okay. Wait, let's restart the page. So these four are playing fine; it's something with the syntax. Yeah, now I suspect only A5 will be played, so all the others are being missed. Yeah — exactly. So three of them do not work in the current setup. Well, that's right. So where is the conversion happening — play segments, convert the EEG to music (or the EEG data), update data, load piano? Yeah, we need to verify the names. So the problem is these notes, the ones that have sharps: all the sharp notes don't work, while these other notes work. So it's a problem with the syntax — a syntax issue. In this modified function, before attempting to play a note, it checks whether window.piano.buffers[note] exists. If the buffer is not found, it logs a warning and skips playing that note. This should prevent the errors you're seeing and provide more insight into which notes are missing.
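Going back to the loading side for a second — the full-range loop it generated looks roughly like this. It's a minimal sketch only: the loadPiano call and its notes option are my assumptions about the soundfont loader's API, not a copy of the generated code.

// Build every note name from A0 to C8, sharps included (88 keys in total).
const NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

function buildFullPianoRange() {
  const notesToLoad = [];
  for (let octave = 0; octave <= 8; octave++) {
    for (const name of NOTE_NAMES) {
      // Trim the extremes: a standard piano starts at A0 and ends at C8.
      if (octave === 0 && !['A', 'A#', 'B'].includes(name)) continue;
      if (octave === 8 && name !== 'C') continue;
      notesToLoad.push(name + octave); // e.g. "C#6", "F#6", "A#5"
    }
  }
  return notesToLoad;
}

// Hypothetical usage with the piano loader:
// loadPiano({ notes: buildFullPianoRange() }).then(piano => { window.piano = piano; });

The longer that list gets, the more audio files get fetched, which is exactly the loading-time and bandwidth trade-off it warned about.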
Okay, so is there some sort of syntax problem? Because the following notes play okay, and these ones will not play — they give a missing-buffer error. These notes not playing while those play fine could be due to the naming convention: for instance, instead of C#6 the library might expect a different spelling of the sharp, such as Cs6 or Db6. This is a common issue, as there is no strict standard for naming sharps and flats in MIDI libraries and soundfonts. Here's how you can troubleshoot and fix this. Check the naming convention for sharps: look at the documentation or the actual file names in the soundfont to see how sharp notes are named; they might use a different character or format. Update your note array: once you know the correct format for sharps, update your note array to match — for example, if the soundfont uses "s" for sharp, your array should be Cs6, Fs6, As5. Should I just try that manually? What are the other options? How do I get this character? Yeah, it's an actual thing. So would one of them play? Comment that out, save, rerun. No, none of them plays. Okay, let's see.

Load all variants: as a workaround, you could attempt to load all common variants of each sharp note when initializing your piano; this would cover whichever naming convention the soundfont uses. Here's an example of how you might adjust the loadPiano function to load the common sharp-note variants. Yeah, but the other ones do not work — maybe they're case-sensitive. In this function, both the standard and alternative sharp note names are added to the notesToLoad array for each octave, ensuring that whichever naming convention the soundfont uses for sharp notes, those notes are loaded. Okay, but they don't actually work, so I'm not sure there's much point in loading them. Oops — case sensitivity or something. Ah, it's these little things that keep me stuck forever. Still doesn't work. How do I actually check?

For example, middle C (C4) is 60, C#4/Db4 is 61, and so on. You can modify your code to use MIDI note numbers, which will bypass the issue of naming conventions entirely. Here's an example. Convert note names to MIDI numbers: first, convert your note names to MIDI note numbers; there are many online tools and tables for this, or you can write a function to do the conversion based on the piano's layout (a rough sketch of such a conversion is at the end of this paragraph). Okay, so I should just replace them with numbers — but how do I know which number is which? Play using MIDI numbers: modify the playSequence function to play notes using MIDI numbers instead of note names; here's a simple example of how to modify your playSequence function. In this version, midiNumbers is an array of MIDI note numbers; you will need to update the code where you define the notes to play so it uses MIDI numbers instead of note names. Using MIDI note numbers should resolve the issue with the different naming conventions for sharps and flats, as MIDI numbers are universally standard.

At this stage I'm happy to try anything. Essentially, the problem is that these notes play okay, and all the other notes — let's select channel 7, for example — it should start playing the notes, but it's still skipping ones that should play okay. There's still something else going on; it seems to be two separate bugs. Yeah, I've already spent two streams on this and I'm still not able to solve the problem. The piano we're loading is a grand piano, and it's working okay, as you can hear. Then there is a function translating this EEG signal into those musical notes, so when you scroll through it, it should play the notes, and that's playing them okay.
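Since the note names keep tripping us up, here's roughly what that name-to-number conversion looks like. This is a sketch under the usual caveats: noteToMidi and SEMITONES are illustrative names, and it only handles the sharp spelling we're using, not flats.

// Convert a note name like "C#6" to its MIDI number (C4 = 60, C#4 = 61, A4 = 69).
const SEMITONES = { 'C': 0, 'C#': 1, 'D': 2, 'D#': 3, 'E': 4, 'F': 5,
                    'F#': 6, 'G': 7, 'G#': 8, 'A': 9, 'A#': 10, 'B': 11 };

function noteToMidi(note) {
  const match = note.match(/^([A-G]#?)(\d)$/);
  if (!match) throw new Error('Unrecognised note name: ' + note);
  const name = match[1];
  const octave = Number(match[2]);
  // MIDI numbering puts C-1 at 0, so each octave starts at (octave + 1) * 12.
  return (octave + 1) * 12 + SEMITONES[name];
}

// noteToMidi('A0') -> 21, noteToMidi('C4') -> 60, noteToMidi('C#6') -> 85

Working in numbers sidesteps the naming problem entirely, which is the whole appeal of GPT-4's suggestion.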
We should be able to fix this. And what are we left with — seven minutes, six minutes of this stream? It seems to be two separate problems, because this skipping business is coming from a different part of the code. There's this playSequence. Let's go with this suggestion — I don't have anything better. playSequence: is this complete? Close this for a sec. We surely have to modify something else; we need to modify the backend as well for this to work. This bit is done in the backend, isn't it? We haven't shared the Python code yet — we have time to do it now. Yes, we have MIDI numbers instead of notes, but those notes are generated by the backend, so we'll have to change quite a bit.

Modify the playSequence function to accept an array of MIDI numbers; here's an example based on the previous version of the function (I'll put a rough sketch along those lines at the end of this paragraph). When you want to play a sequence of notes, use their MIDI numbers. And if you haven't already, ensure that your loadPiano function loads the full range of notes — since you're now using MIDI numbers, we want to make sure all possible notes are available. Let's just go with this, but that's not all of it: we need to modify the backend as well. Let's start from the bottom. From memory, I don't think we've shared the Python code yet. In loadPiano we just load all the notes, so we don't need those names anymore. Supposedly this replaces all of that — playSequence with the MIDI numbers. Let's also run Format Document on this. Right now we need to change the playSequence function. The complete function seems to be... the app is not currently running; we have to start it again. Okay, it's playing the four test notes — the question is, what will it do when we move the EEG scroller? It's actually working, surprisingly, without changing the backend. Although — it's quite a short window of 30 seconds, and it's just playing the same thing over and over again, isn't it? That's not good; I was getting happy too early. It's essentially just playing those four notes over and over, which is not cool, and it doesn't really play anything from the scroll when I move through the file, because we need to change the backend. I have to update this — this is the part that's actually generating the notes. Yeah, that's general stuff; we'll just let it generate the general description.

Your Flask app, app.py, is set up to serve a web application that processes EEG data and converts it into musical notes. It has three main components. Serving the main page: the root route serves an index.html page. EEG data endpoint: the get_eeg_data route fetches EEG data based on the provided start, count, and channel parameters; it uses the load_data function to read from a specified data file. EEG-to-music conversion endpoint: the convert_eeg_to_music route takes EEG data and converts it into musical notes; this conversion involves scaling EEG frequencies and mapping them to musical notes using linear interpolation. Potential updates or enhancements: error handling and logging — improve error handling and logging for better debugging and maintenance; it's good to log errors, but also to provide meaningful feedback to the client in case of failure. Yeah, let's stop reading that, because it's generic stuff — the last response was too generic. We updated the JavaScript to use MIDI numbers instead of notes; do we need to update the backend to produce MIDI numbers instead of notes as well? Yep. Yeah, it's a significant change, isn't it?
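For reference, the MIDI-number version of playSequence ends up looking something like this. Again a sketch: it assumes a soundfont-style piano object on window.piano whose play() accepts MIDI numbers and an AudioContext start time, and the fixed spacing between notes is made up for illustration.

// Play an array of MIDI note numbers one after another.
function playSequence(midiNumbers, noteDuration = 0.5) {
  if (!window.piano) {
    console.warn('Piano not loaded yet, skipping playback');
    return;
  }
  // Assumes the player exposes the AudioContext it was created with.
  const now = window.piano.context ? window.piano.context.currentTime : 0;
  midiNumbers.forEach((midi, i) => {
    window.piano.play(midi, now + i * noteDuration, { duration: noteDuration });
  });
}

// Hypothetical usage: playSequence([60, 62, 64, 65, 67]); // C4 D4 E4 F4 G4

The catch is that the backend still sends note names at this point, which is why nothing sensible comes out on scroll until that side changes too.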
Updating the JavaScript to use MIDI numbers instead of note names is a significant change that affects how your application interprets and handles musical data. If you have made this change on the client side (JavaScript), it is a good idea to reflect it in the backend (Python) as well, for consistency and to ensure a smooth flow of data between the front end and back end of your application. Here's how you can approach this. In your convert_eeg_to_music route, where you currently convert EEG data to musical notes, you will now convert it to MIDI numbers instead. The process involves two main steps. Mapping frequencies to MIDI numbers: instead of mapping EEG frequencies to note names, you map them to MIDI numbers; the MIDI number for a given frequency can be calculated using a formula that relates frequency to MIDI numbers (I'll sketch that formula below). Return MIDI numbers: the endpoint should now return an array of MIDI numbers instead of note names. Here's an example of how you can modify the conversion process in your Python code. Okay, yeah, so we need to change the backend — it was kind of to be expected. Should we do it now? Since you have already updated the JavaScript to work with MIDI numbers, ensure that it correctly handles the data received from the backend; the JavaScript should expect an array of MIDI numbers from the API and use them directly for playing music. After these changes, it's important to thoroughly test your application to ensure that the front end and back end are in sync and that the conversion from EEG data to MIDI numbers, and then to music playback, works as intended.

Let's see if it can just redo it — can you regenerate the whole working Python code with the necessary updates? Yep, so we have this extra function. There are a couple of magic numbers in it; it's supposed to convert frequency to MIDI. It's not good enough. Where is the conversion? Yeah — the conversion from the previous code. Certainly, I'll integrate the changes for converting EEG data to MIDI numbers into the existing Python code for your Flask application. The updated script will include the functionality to handle EEG data and convert it into MIDI numbers, which will then be sent to the front end. Okay, so we're still using interpolation. I'm trying to make sure there aren't any placeholders. Converting EEG to music — that's the same code. Okay, there's a placeholder there. There's this range and the scaling to MIDI — it should have been doing this already; it probably just wasn't doing it correctly. Converting the frequency — didn't it already have that code? Oh, that should be okay. And this is a Flask application, right? Let's put this one in and try integrating it ourselves.

EEG data endpoint, get_eeg_data: this endpoint remains unchanged and continues to serve EEG data based on the requested parameters. EEG-to-MIDI conversion, convert_eeg_to_music: this endpoint now uses the scale_eeg_to_midi function to convert EEG frequencies to MIDI numbers and returns those MIDI numbers in the response. Data scaling function, scale_eeg_to_midi: this function takes EEG frequencies, scales them to a specified range of MIDI numbers, and returns the scaled values; you can adjust the MIDI range as needed. Loading EEG data, load_data: this function should be implemented with your logic for loading EEG data from a file or other data source. Please make sure to replace the placeholder comments and functions with your actual data loading and processing logic.
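Just to pin down the two conversions being described: the real version of this lives in the Python backend, but for consistency with the other snippets here it's sketched in JavaScript, and frequencyToMidi, scaleEegToMidi, and the default MIDI range are illustrative choices rather than the generated code.

// Standard frequency-to-MIDI relation: A4 = 440 Hz is MIDI 69, with 12 semitones per octave.
function frequencyToMidi(freqHz) {
  return Math.round(69 + 12 * Math.log2(freqHz / 440));
}

// Linear scaling of EEG-derived values onto a chosen MIDI range,
// mirroring the scale_eeg_to_midi idea on the backend.
function scaleEegToMidi(values, midiLow = 36, midiHigh = 84) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  const span = (max - min) || 1; // avoid dividing by zero on a flat signal
  return values.map(v => Math.round(midiLow + ((v - min) / span) * (midiHigh - midiLow)));
}

Either way the endpoint ends up returning an array of plain integers, which is what the updated JavaScript now expects.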
It also says to test the application thoroughly to ensure that the conversions and data flows work as expected. So, the regular test: when we scroll through the EEG — nothing. The piano was loaded with all the notes. It's still thinking about it, isn't it? There was something wrong there; let me just reload this quickly. No, the piano is loading quickly. Let's make the window size 10. Any other placeholders or something? Scale frequencies, return MIDI numbers. Can't hear anything. Come on, GPT-4, fix this for me. I need to start troubleshooting again. Just wondering about these notes. So there's playSequence — that's the function, it's the MIDI one, it's the play-MIDI button thing — and it's also triggered when scrolling through the EEG, when the small bar is moved. That's right — thank you, GitHub Copilot. I'm not sure how much progress we made today, if any; we might just continue this next time. Let's make some adjustments to your code to ensure this happens. Correct data handling in the convert-EEG-to-music function: ensure that the data received from the convert_eeg_to_music endpoint is correctly processed. It looks like you're expecting data.notes; the base...