Well, there we have the sound and everything. You can only check it from here once you go live, so normally you'd do it on another device; I'll just pop it in a different window for a sec. Hello, Rohan, I haven't started yet, I'm starting in a second, just checking my sound and everything. Can you hear anything? Yeah, that's the problem with testing on the same device: the sound goes into the stream, which is not cool. Okay, I'll just close this. Probably we'd have to monitor on another device. [Music]

We'll start by reviewing some literature, some papers on EEG to music conversion, using these plugins in GPT-4. Can I get this ScholarAI... ScholarAI. So this is GPT-4 using the plugins, and that's how it looks today; it will probably look very different tomorrow, no one knows. When you use the plugins, you can select which ones you want, and I have these three selected. I did a review of one of them before this (I'll come back to it later). It was kind of working okay, it was integrating with my Visual Studio Code, but I do have GitHub Copilot as well, so I'm not sure which one's better. You can probably guess. And then I haven't really used this YouTube summaries one; it actually wasn't working, I think, I'll have to try it again. But this ScholarAI says: "Unleash Scientific Research. Search 200 million plus peer-reviewed papers and explore images and text from scientific PDFs." Okay, that's fine.

So when asked for an EEG to music generator, it generated this general overview. That's fine, we covered it in previous streams. It keeps suggesting anything with EEG or ECG, and it keeps suggesting the MNE library, which we're actually trying to avoid for various reasons, mainly because we want to do things ourselves, and I think MNE is quite outdated as well. And there's the same problem with the papers at the end. So here it's using this ScholarAI, which has its own API, and it's using these keywords: the query is papers on EEG to music generation, sorted by relevance. Not sure why it's given an offset of four, assuming there are three other papers. It did give a list of papers; however, I wasn't able to replicate that search myself. If I do a general search for "EEG to music generator", I get a whole different list of papers. Well, at least the first one is not on the list, and the other four are actually not very relevant. I guess that's because of the "brain-computer interfaces" keyword it's using in there.

So then I tried doing an advanced search, which you can do from here. Where is it again? Why is it going away? Is it OR or...? Oh, come on. Well, that's how the advanced search looks in Google Scholar; it's really hard to get to because they keep changing it. Yes, we have "music generation", which should probably be in quotation marks, I don't know. But anyway, this ScholarAI API is obviously different, because the actual scholar.google.com doesn't give me the same papers. So that's something to explore further, maybe spend five minutes on it. The first paper is the only one that is actually somehow relevant, but then it's talking about a more clinical case, and very generic pre-processing of the EEG signal. You can see by looking at this figure, you can kind of guess what year this paper is from. So they're doing wavelet decomposition; I think they actually go into the details of how it's done: signal reconstruction, filtering into subbands, extraction of discriminative features. Do they actually say what the features are? I don't think so. I had a look at this paper before; there's a comparison between the wavelet and Fourier transforms. Okay.
I can tell it's quite generic stuff. Okay, so going straight into the methods section, and making it bigger. Let's read this quickly. The process is performed using an EEG cap that consists of EEG electrodes, which will be in accordance with the international system of electrode placement. The arrangement of electrodes will be such that they are at the correct position on the scalp, i.e. the occipital lobe. Okay, no information there. A look at the project structure, which has an amplifier pipeline with a differential instrumentation amplifier attached to an operational amplifier and filters for removing noise. Then the signals are passed on to the microcontroller for digitisation. After the conversion, signals are sent to a computer via USB, then signal processing is done and the desired result is attempted. It's illustrated by the block diagram in figure five. It feels like it was written by ChatGPT or something. There's another very low quality figure; it's just very low res, you can barely read it. Might close that. Okay, where are the actual results? "An intelligent way..." Yeah, if you have "an intelligent way" in the title, that should be your red flag. Something using a brain-computer interface. Okay. It's very generic stuff; there are actually no details on what the system actually is. Is it a research conference? Sorry, I should have looked at this first. This is a conference paper. I'll close it. But that's the first hit when we ask for papers, the first thing that this ScholarAI add-on in GPT-4 gives us. We asked for papers, and the first one is not really a paper, and then the other ones are not really relevant, because they're mainly focusing on brain-computer interfaces; this one is about games, so no music mentioned. So yeah, I've played with this before, and it seems at this stage it's just better to use Google Scholar directly instead of this ScholarAI, since that's essentially what it does anyway, and when we do use ScholarAI there are actually more irrelevant papers.

Let's have a look at the first one, from 2006. Wait a second, this has a very similar image, but I have a feeling the authors are different. The PDF is available, on some Japanese website. Yeah, this one actually has more details than that, though not too many; the image is not there anymore. So it's estimating emotions and building a music generator out of the emotions, which is interesting. Might be worth reading the whole thing. Anyway, let's get back to what we were doing.

Previously, we spent quite a lot of time on music without much progress: it was generating something, but it was mostly noise, it wasn't very musical. So I thought we'd give it another go. This time we won't start from scratch as previously. We did learn quite a bit about music generation, MIDI, soundfonts, things like that; you can see the previous videos. The idea here is to take an existing tool. This one is already on the website, you can go try it out. We'll also start a new GPT-4 chat, and we'll use GitHub Copilot as well. Copilot has trouble keeping the overall context, so I have to work directly with GPT-4 too. Copilot is generally much faster as well, which makes me think it's not GPT-4, it's 3.5.

Do we use two functions here? Right, this is the code. The function takes four parameters: low cut, high cut, sampling frequency, and order, with order 5 by default. The function calculates the Nyquist frequency, which is half of the sampling frequency. It then normalizes the low-cut and high-cut frequencies by dividing them by the Nyquist frequency. The butter function from the scipy.signal module is then used to calculate the coefficients of the Butterworth filter; we get the coefficients b and a. And the next function is actually using those coefficients. No, it does not. Why not? No, yes, it does, it has the same inputs. It's generating b and a again and using lfilter, a one-dimensional IIR or FIR filter. Which one is it by default? Sure, these would be the parameters: b, array-like... Okay, it's not actually telling us which one it is. Is it IIR or FIR? Sure, it's IIR. That's just a guess.
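Just to make that normalisation step concrete before moving on, here's a minimal sketch of the arithmetic. The cut-off and sampling values are assumptions for illustration; the actual filter lives in the Python back end and passes the normalised cut-offs to scipy.signal.butter.

```javascript
// Hedged sketch: how band-pass cut-offs get normalised to the Nyquist frequency.
// All numeric values below are assumptions, not taken from the actual code.
const fs = 256;              // assumed sampling frequency in Hz
const lowCut = 0.5;          // assumed low cut-off in Hz
const highCut = 30;          // assumed high cut-off in Hz
const order = 5;             // filter order (the default mentioned above)

const nyquist = 0.5 * fs;    // Nyquist frequency is half the sampling frequency
const low = lowCut / nyquist;   // normalised low cut-off, in the 0..1 range
const high = highCut / nyquist; // normalised high cut-off, in the 0..1 range

console.log({ order, low, high }); // e.g. { order: 5, low: ≈0.0039, high: ≈0.234 }
```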
We have the same description from GPT-4 for comparison. Yes, GPT-4 gives a better, more general explanation, which is better for our purposes. Filtered data is returned. It didn't actually explain the whole code, did it? Yeah, GPT-4 gives a more general, overall description, which is better. But we'll get into actual prompts in a second. This is obviously the HTML; we probably don't need to specify the name, maybe just index.html. We used to always say "this is for reference only" so it doesn't generate anything else, but that prompt has kind of mixed results; sometimes GPT-4 just ignores it outright. Let's get this explained with GitHub Copilot and with GPT-4 as well, quickly. Then we also have the JavaScript, the EEG JavaScript file, in the static folder. Remove this thing at the bottom. Again, that "for reference only" prompt is being ignored most of the time, or some of the time. Let's see what it has to say, because we haven't dealt with this tool in a while now.

Data filtering and processing functions: functions are defined for applying a Butterworth bandpass filter, and the script includes the capability for wavelet denoising using the PyWavelets (pywt) library. Flask routes: the main route serves an HTML page; the get_data route handles GET requests to process and return EEG data based on specified parameters like channel, window size, filter settings, and wavelet transform options. Data processing in the get_data route: it extracts data based on the start second and window size from the request, processes the EEG data by applying detrending and wavelet denoising, and calculates the fast Fourier transform (FFT) for spectral analysis. The script filters the data into different frequency bands (delta, theta, alpha, beta) and computes the power spectral density (PSD). It returns the processed data, including raw and denoised signals, spectrums, signal-to-noise ratio (SNR), and mean squared error (MSE), as JSON. Error handling: there's validation for the wavelet transform type, returning an error if an invalid type is provided. Running the app: the script is set to run the Flask app in debug mode. This script is well structured for a web-based application aimed at analysing and visualising EEG data, with options for filtering, denoising, and spectral analysis. It's good that GPT-4 says it's a good script, considering it wrote it itself.
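As a rough idea of what calling that get_data route from the browser looks like, here's a minimal sketch only; the exact parameter and field names are assumptions based on the description above, not the actual code.

```javascript
// Hedged sketch: querying the Flask get_data route described above.
// Parameter names and response fields are assumptions for illustration.
async function getEegWindow() {
  const params = new URLSearchParams({
    channel: 0,          // which EEG channel to return
    start_second: 120,   // where in the recording the window starts
    window_size: 4,      // window length in seconds
    wavelet: "db4",      // wavelet type used for denoising
  });
  const response = await fetch(`/get_data?${params.toString()}`);
  if (!response.ok) throw new Error(`Backend returned ${response.status}`);
  const data = await response.json();
  // Expected (assumed) fields: raw and denoised signals, the spectrum,
  // plus SNR and MSE for the denoising step, as described above.
  console.log(data.snr, data.mse);
  return data;
}
```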
The provided HTML document is the index page of a web application that visualises EEG (electroencephalography) data. The document has several key components. Head section: meta tags for the Open Graph protocol, providing title, description, and image for rich previews on social media platforms; responsive design settings and character encoding set to UTF-8; links to external stylesheets and the Plotly JavaScript library for data visualisation. Header: a navigation bar with a logo, a menu toggle for mobile devices, and links to home, about, and contact pages. Body content: a main section containing controls for data visualisation such as wavelet denoising options, wavelet types, levels, and a detrend toggle; spectrum control options including whole-spectrum display, frequency scale, and filter order; general controls like the channel selector, window size, and a scroll bar for navigating through the data; plot containers for displaying the raw signal and frequency spectrum; and output containers showing the signal-to-noise ratio (SNR) and mean squared error (MSE). Content section: detailed descriptions and features of the EEG spectrum visualisation application, explanations of the different aspects of the EEG data visualisation, including technical details and data source information, and a closing note discussing the application's utility. Scripts: JavaScript for additional functionality and a script for loading Google AdSense ads.

Next, the JavaScript. The JavaScript file you provided is a crucial part of your eeg_spectrum_16_chan_denoise_music web application. It handles user interactions and data visualisation in the browser. Here's an overview of its functionality. Initial setup: the script listens for the DOMContentLoaded event to ensure it runs after the full HTML document has been loaded, and it initialises variables for application state, such as isPlaying, currentSecond, and updateInterval. DOM element references: it obtains references to various HTML elements like sliders, checkboxes, and selectors that users will interact with. Environment check: a check to determine if the application is running in a production environment, which influences the API endpoint used for data fetching. Data fetching and rendering: the fetchDataAndRender function is defined to fetch data from the Flask back end and render it using Plotly.js. This function constructs a query string with parameters from the UI elements and sends a request to the back end. Upon receiving the data, it uses Plotly to plot the raw EEG signals and frequency spectrums and displays the SNR and MSE values. Event listeners: event listeners are added to various UI elements. When a user interacts with these elements, e.g. changing a value or toggling a switch, fetchDataAndRender is called to update the visualisations with the new parameters. A listener on the file scroller allows users to navigate through the EEG data, and listeners on other controls like the channel selector, window size slider, and wavelet settings trigger data fetching and re-rendering based on the selected options. Initial data rendering: finally, fetchDataAndRender is called at the end to load and display data when the page is first loaded. This script effectively integrates user interface elements with data processing and visualisation, offering an interactive and dynamic experience for users exploring EEG data. It makes good use of asynchronous JavaScript for fetching data and leverages the powerful Plotly library for rendering complex data visualisations in the browser.
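To make that fetch-and-render flow a bit more concrete, here's a minimal sketch of the rendering half. Element IDs, trace names, and data fields are assumptions, not the actual code.

```javascript
// Hedged sketch: plotting a returned window with Plotly.js and updating the
// SNR / MSE read-outs. Element IDs and field names are assumptions.
function renderEegWindow(data) {
  // Raw vs denoised signal in the time domain.
  Plotly.newPlot("raw-signal-plot", [
    { y: data.raw, name: "raw", mode: "lines" },
    { y: data.denoised, name: "denoised", mode: "lines" },
  ], { title: "EEG signal" });

  // Frequency spectrum of the same window.
  Plotly.newPlot("spectrum-plot", [
    { x: data.freqs, y: data.spectrum, name: "spectrum", mode: "lines" },
  ], { title: "Frequency spectrum" });

  // Quality metrics for the denoising step.
  document.getElementById("snr-output").textContent = data.snr.toFixed(2);
  document.getElementById("mse-output").textContent = data.mse.toFixed(4);
}
```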
The whole spectrum: we have the delta, theta, alpha, beta bands, and it seems to work okay. This is during a seizure; there's actually an artefact at the edge there. And this is during non-seizure, which is the weird thing about it. Yeah, this is the end of the seizure. So there's seizure, there's no seizure. It's a tricky business because the spectra actually look similar, but they're similar for different reasons; you have to look at the power in them. So yes, we want to improve this tool, but we also want to generate music out of it, so we don't have to review EEG manually like this: just hit the play button and listen to it. Hopefully it's not too annoying to listen to.

I need to write a prompt for it: "This tool is working well. It's fast and responsive. I would like to add an option to play musical notes. When we are scrolling through the file, the notes could be derived from the power in each frequency component and potentially played as chords, so multiple notes together." Let's just see what it says. When you write something unstructured like that, the good thing is that it also organizes it. It says integrating a feature to play musical notes based on the power of frequency components in EEG data would turn the data into an auditory experience. High-level design: data mapping, frequency-to-note mapping, and scaling, so the power of each frequency could control the volume or dynamics of the corresponding note. Chord generation: chords can be formed by grouping notes that harmonize well together; that's where help from a musician would come in. Implement logic to decide how chords change over time. Audio synthesis: play it back using an audio API to play the chords. Integrating with the existing application: add a play button. A play button... yeah, we don't want to do that. The question is whether we just want to play raw frequencies or actual musical notes, which we tried doing previously.

So, in the code we should already have the power for each EEG frequency component, delta, theta, alpha, beta, when scrolling through the file; can we use this to generate musical notes? Kind of the same thing again, but it's not actually doing it. So if we assign a specific note, or a range of notes, to each EEG frequency band, does it mean we will only have four notes? It did actually start generating JavaScript using the Web Audio API; yeah, this one's using sinusoids. If you assign one note to each EEG frequency band (delta, theta, alpha, beta), you would indeed only have four distinct notes, each representing the activity within its respective band. However, there are several ways to expand this into a richer auditory experience. Note variation within each band: instead of assigning a single note to each band, you can map the power within each band to a range of notes. This way, variations in power lead to different notes within the band's range. For example: delta band, power variations mapped to a range of low notes; theta band, power variations mapped to a range of slightly higher notes, and so on. This approach allows for more variability and a more complex sound as the EEG data changes. Choose a musical scale and assign a range of notes from that scale to each frequency band. For instance... eh, this sounds like a better idea. Each band will play notes within its assigned range based on the power in that band. If you're playing notes from multiple bands simultaneously, you're effectively creating chords: the specific notes played from each band at any given moment combine to form a chord, which changes as the EEG data changes. You can add a rhythmic element by varying the duration and onset of each note based on the EEG data, adding another dimension to the auditory representation. Harmonization: implement rules for harmonizing notes played simultaneously from different bands. Octave variation: play the same note but in different octaves depending on the intensity or other characteristics of the EEG signal. This approach provides a dynamic and engaging way to represent EEG data through sound, with each EEG band contributing to a composite auditory experience that changes with the EEG data.
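Since the code it started generating plays sinusoids through the Web Audio API, here's roughly what such a playFrequency helper looks like. This is a hedged sketch with assumed envelope values and function names, not the exact code GPT-4 produced.

```javascript
// Hedged sketch: play a single sinusoidal note with the Web Audio API.
// Envelope times and the shared-context pattern are assumptions for illustration.
// Note: browsers keep a context created like this suspended until a user gesture;
// that issue comes up later in the stream.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

function playFrequency(frequency, gainValue = 0.2, durationSec = 0.5) {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();

  osc.type = "sine"; // sinusoid, as in the generated code
  osc.frequency.setValueAtTime(frequency, audioCtx.currentTime);

  // Simple decay envelope so notes don't click when they stop.
  gain.gain.setValueAtTime(gainValue, audioCtx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, audioCtx.currentTime + durationSec);

  osc.connect(gain).connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + durationSec);
}

// Playing several frequencies at once gives a chord:
// [261.63, 329.63, 392.0].forEach(f => playFrequency(f)); // C major triad
```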
Another main question is: should we do this on the front end or the back end, mainly? Or both? Deciding whether to implement the EEG-to-music feature predominantly on the front end or the back end, or both, depends on various factors including performance, complexity of processing, real-time responsiveness, and scalability. Let's consider the pros and cons of each approach. Front-end implementation. Pros: real-time interaction, since directly mapping EEG data to sound in the browser allows for immediate auditory feedback as the user interacts with the data; reduced server load, since processing is done on the client side; and client-side libraries, leveraging powerful libraries like the Web Audio API for audio synthesis and playback. Cons: performance depends on the client's device, since the processing load is entirely on the user's device, which might not be ideal for low-end devices; browser compatibility, as you need to ensure compatibility across different browsers and devices; and limited processing power, since complex audio processing might be limited by the client's hardware. At this stage, because we're running the server for other things as well, we want to do as much as possible on the front end. So yes, the performance will highly depend on the device you are using to access the web application. Back-end implementation. Pros: a controlled environment, since processing on the server side can be more predictable and efficient, especially if you have powerful server resources; complex processing, as more complex algorithms for sound synthesis can be implemented; and consistency across devices, independent of the client's device capabilities. Cons: network latency, since it requires sending data back and forth between the server and client, which might introduce latency and affect real-time responsiveness; and server load, as it increases the computational load on the server, which could be a concern if many users access the service simultaneously. Yeah, we'll probably go for some sort of hybrid, well, obviously a hybrid approach; the question is what the ratio between back-end and front-end processing should be. Hybrid approach. Pros: balance of load, distributing tasks based on their nature and complexity (for instance, basic audio processing and playback can be handled on the front end, while more complex data analysis and transformation can be done on the back end) and optimized performance, utilizing the strengths of both server-side and client-side processing. Cons: complexity, since it requires careful coordination between front end and back end, which can increase the complexity of your application. A front-end-heavy approach suits scenarios where real-time user interaction and responsiveness are crucial; this is ideal if the audio processing is not overly complex and if you can ensure reasonable browser and device compatibility. A back-end-heavy approach suits scenarios requiring complex data analysis and transformation, or when you need to move the computational load away from the client. A hybrid approach works if you have both complex processing needs and a requirement for real-time interactivity; it can help balance the load and optimize performance. In conclusion, the choice largely depends on the specific requirements and constraints of your application.
A front-end approach is typically more suitable for real-time interactive applications, while a back-end approach is better for heavy processing tasks. A hybrid approach can offer a balance between the two, leveraging the strengths of both client-side and server-side processing. Okay, so my next prompt asks for some recommendations for how the code could look, since the conceptual suggestions alone would not work; they don't reference the necessary libraries. To make it work, we would like to change the code so it plays chords, the notes, musical notes, as we scroll through the EEG data. To modify your application so it plays musical notes or chords as you scroll through the EEG data, you need to integrate audio synthesis and playback into your existing JavaScript code. This can be done predominantly on the front end using the Web Audio API, which is suitable for real-time audio processing in web applications. Why is this conceptual stuff and not actual code? First, set up a function to play individual notes using the Web Audio API. Then create a function to map the power in each EEG frequency band to a musical note; this function should take EEG data as input and return frequencies (notes) to be played. Then modify the existing scroll event listener to trigger the note-playing function based on the current EEG data. Let's go from the bottom, modifying this event listener, assuming fetchDataAndRender is defined in the second bit here. fetchDataAndRender should be accessible after the file scroller updates. In JavaScript, the order is important. So we're using playFrequency here, playFrequency there, and we'll get the issue with webkitAudioContext sorted in a sec. And we have mapEEGDataToNotes, and the scroller event listener. You need to fine-tune mapEEGDataToNotes to create an enjoyable and meaningful auditory experience; the mapping from EEG data to musical notes can be complex and may require experimentation to get right. Performance: ensure the sound synthesis doesn't impact the app's responsiveness, and test across different devices. User control: provide controls to enable or disable sound, adjust volume, or change the mapping logic. Musicality: adjust the mapping to ensure the resulting sound is harmonious and reflects the EEG data meaningfully. Fine-tuning the mapEEGDataToNotes function requires a thoughtful approach to effectively translate EEG data into musical notes. Here's a more detailed strategy to refine this mapping. Still not writing the code. Firstly, decide the musical scale and range you want to use; for simplicity, you might start with a C major scale spanning one or two octaves, and later expand to more complex scales or a wider range. And it's still not writing it. EEG data can vary significantly in amplitude, so normalize these values to a consistent range that maps well to your musical scale; for example, you could map EEG power values to a range of 0 to 1. Divide your chosen musical scale into segments corresponding to the EEG frequency bands, delta, theta, alpha, beta. Then, for each EEG band, map the normalized power value to a specific note in its range; higher power could map to higher notes within the band's range. Use the power values not just to determine pitch, but also dynamics, loudness: a higher power might result in a louder note. Question about when we normalize: it's normalizing within a certain window, so the time window there is important. If playing chords, multiple notes simultaneously, ensure the combination of notes from different bands harmonizes well; this might involve selective note choices or octave adjustments.
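Following that strategy, a mapping function might look roughly like this. It's a hedged sketch only: the band ranges, the scale, and the field names are assumptions, not the code GPT-4 actually produced.

```javascript
// Hedged sketch: map normalised band powers (0..1) to notes from a C major scale,
// one sub-range of the scale per EEG band. Band names, ranges, and scale are assumptions.
const SCALE_FREQS = [
  130.81, 146.83, 164.81, 174.61, 196.0, 220.0, 246.94, // C3..B3
  261.63, 293.66, 329.63, 349.23, 392.0, 440.0, 493.88, // C4..B4
];

// Each band gets a slice of the scale: delta lowest, beta highest.
const BAND_RANGES = {
  delta: [0, 3],
  theta: [3, 7],
  alpha: [7, 10],
  beta: [10, 14],
};

function mapEEGDataToNotes(bandPowers) {
  // bandPowers is assumed to look like { delta: 0.8, theta: 0.3, alpha: 0.5, beta: 0.1 }
  return Object.entries(bandPowers).map(([band, power]) => {
    const [start, end] = BAND_RANGES[band];
    const clamped = Math.min(Math.max(power, 0), 1);           // keep power within 0..1
    const index = start + Math.floor(clamped * (end - start - 1)); // pick a note in the band's slice
    return {
      frequency: SCALE_FREQS[index], // pitch from the band's slice of the scale
      gain: 0.05 + 0.25 * clamped,   // louder notes for higher power (dynamics)
    };
  });
}
```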
We're replacing all of this mapEEGDataToNotes. Musical scales: experiment with different scales, major, minor, pentatonic, etc., to find what best represents your data. Timing: consider how the duration of each note relates to the time scale of the EEG data. User feedback: iteratively refine the mapping based on user feedback, especially from those familiar with EEG data interpretation. We're getting errors. Remember, this process involves both technical implementation and creative choices; the goal is to create an auditory representation of the EEG data that is both informative and aesthetically pleasing. That's fine. We have an issue in the script at line 142 with currentEEGData; it replaced this with the data value. I have a feeling this should be happening elsewhere. Can GPT-4 help with this? Based on the code snippets and context provided earlier, it looks like currentEEGData is intended to hold the EEG data at the current position of the file scroller. To resolve this issue, you need to ensure that currentEEGData is correctly assigned the EEG data before it's used in the function attached to the file scroller event listener. Declare currentEEGData as a global variable at the top of your JavaScript file; this makes it accessible throughout the file. Ensure that currentEEGData is updated with the latest EEG data whenever new data is fetched; this should be done within the fetchDataAndRender function, or equivalent, which fetches and processes the EEG data. Now that currentEEGData is properly defined and updated, you can use it within your event listener. Let's look at how this was done before. It's returning the data; how do I do that? Now that currentEEGData is properly defined and updated, you can use it within your event listener. Error handling: add checks to ensure currentEEGData is not null or undefined before using it. Data synchronization: make sure currentEEGData is synchronized with the current position of the scroller and is updated every time new data is fetched. Debugging: use console.log statements or browser developer tools to track the value of currentEEGData and ensure it's being updated correctly. This approach should resolve the reference error by ensuring currentEEGData is defined and accessible when needed. We haven't sorted this yet. So the function takes this time in seconds, the start time. This is fetching... this is fetchDataAndRender. Okay, I'm confused; we should probably split this function. We have to modify this. To modify the fetchDataAndRender function to include playing musical notes based on the EEG data, you need to update the function to process the data for musical note generation and then trigger the note-playing mechanism. This involves a few additional steps after fetching and processing the data. You need a function that takes EEG data and returns the corresponding musical notes; this function, mapEEGDataToNotes, should be defined based on how you choose to map EEG frequency bands to musical notes. Now we have that. After plotting the EEG data, use the mapEEGDataToNotes function to get the notes that correspond to the current data segment and then play them. This one is the actual function. The existing code fetches and plots the data: we have the fetch of the API URL, response.json(), the data; that's the code for plotting the signal. And then we have a constant, notesToPlay.
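So the tail end of fetchDataAndRender would look something like this; a sketch only, assuming the helpers sketched earlier and an assumed band_powers field in the backend response.

```javascript
// Hedged sketch: the end of fetchDataAndRender, chaining plotting and playback.
// Assumes the getEegWindow / renderEegWindow / mapEEGDataToNotes / playFrequency
// helpers sketched above, and an assumed band_powers field in the JSON.
let currentEEGData = null; // global, as suggested, so the scroller listener can reuse it

async function fetchDataAndRender() {
  const data = await getEegWindow(); // fetch + parse JSON (sketched earlier)
  currentEEGData = data;             // keep the latest window for other listeners
  renderEegWindow(data);             // Plotly plots + SNR / MSE read-outs

  // Turn the current window's band powers into notes and play them as a chord.
  const notesToPlay = mapEEGDataToNotes(data.band_powers);
  notesToPlay.forEach(({ frequency, gain }) => playFrequency(frequency, gain));
}
```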
Okay, we'll need to integrate the whole generator, the whole function. Ensure that fetchDataAndRender is called when the user scrolls through the file, which will automatically trigger the note-playing mechanism. Right, so we don't need all that, do we? Still generating... Performance: test the application to ensure that playing notes does not adversely affect performance, especially when scrolling rapidly through the data. Mapping logic: fine-tune the mapEEGDataToNotes function to create a meaningful auditory representation of the EEG data; this might require some experimentation. User control: provide options for users to enable or disable the sound, or adjust parameters like volume or the mapping strategy. To create a fetchDataAndRender function that includes playing musical notes based on EEG data as you scroll, you need to integrate several components: fetching the data, plotting it using Plotly, and then converting specific EEG data points into musical notes. Here's a complete function based on the context and requirements you've provided. Okay, so we already have playFrequency and mapEEGDataToNotes. Now it keeps adding more and more functions, something like "extract EEG data for music". And the main one is fetchDataAndRender, which we are replacing. "The AudioContext was not allowed to start." And another function is not defined: note range. We have getNoteRange, which implies yet another function. Right, this is actually similar to what we had in the previous code. I have a feeling we need to change the back end as well. We have powerToNote. Ensure that the notes array and getNoteRange contain all the notes you plan to use; if you're using a simplified scale or a different notation, like flats instead of sharps, adjust the array accordingly. This implementation assumes a linear traversal through the notes; if you have a different mapping in mind, like using a specific scale or skipping certain notes, you need to modify the logic. The implementation of getNoteRange here is a basic version; depending on your musical requirements, you might want to refine it further. That's just a warning, not an actual error, so that's fine, it's info: "The AudioContext was not allowed to start." That's why we can't hear anything. And in the JavaScript at line 129, webkitAudioContext may not exist. The error message you're encountering, "The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page", is a security feature implemented in modern web browsers. It prevents audio from automatically playing without user interaction, to avoid unexpected sounds when a web page loads. To resolve this issue, you should modify your code to create or resume the AudioContext in response to a user gesture, such as a button click. Here's how you can implement this. I should be able to not have an additional button. AudioContext initialization: the AudioContext is now initialized only after the user clicks the enable-audio button; this is in line with the browser's requirement for a user-initiated action. Button removal: optionally, you can remove the button after the audio is enabled to clean up the UI. Safety check: the playFrequency function includes a check to ensure the AudioContext is initialized; if not, it logs a message and does not attempt to play audio. Additional note on user experience: it's important to clearly communicate to the user why they need to click the button to enable audio; this helps provide a smooth user experience. With these changes, your application should comply with the browser's autoplay policy, and the AudioContext will be allowed to start after the user's explicit action.
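What GPT-4 is proposing there is essentially this pattern; a minimal sketch, with an assumed button ID, of creating or resuming the AudioContext only after a user gesture.

```javascript
// Hedged sketch: start the AudioContext only after an explicit user gesture,
// which is what the browser's autoplay policy requires. The button ID is an assumption.
let audioCtx = null;

document.getElementById("enable-audio").addEventListener("click", async () => {
  if (!audioCtx) {
    audioCtx = new (window.AudioContext || window.webkitAudioContext)();
  }
  if (audioCtx.state === "suspended") {
    await audioCtx.resume(); // resume() returns a promise
  }
  console.log("Audio enabled, context state:", audioCtx.state);
});
```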
Okay, I actually don't want the button. I'm just getting this error, which doesn't seem to be related to the button, and I don't want an extra button. The TypeScript error you're encountering is related to TypeScript's type checking. webkitAudioContext is a prefixed version of AudioContext used in older versions of certain browsers, like Safari. However, TypeScript's type definitions for the window object might not include webkitAudioContext, as it's a non-standard feature. You can assert the type of window as any to bypass the type checking for webkitAudioContext; this is a quick solution, but it bypasses the benefits of TypeScript's type safety. A more type-safe approach is to check whether webkitAudioContext exists on the window object before using it; this way, you won't bypass TypeScript's type checking. So where does this go? So it's in a TypeScript file. I recommend the second approach: it maintains type safety and ensures compatibility with browsers that may still use the prefixed version. The first approach is quicker but less safe in terms of type checking. The third approach is more suitable for larger projects where you frequently need to deal with non-standard features and want to maintain type safety. We don't particularly care about TypeScript here. That error is okay, but the browser is not playing anything at the moment. We also do not want to add an additional button; it is not needed. Can we make sure the sound is played when the scroller is moved? If you want to ensure that sound plays when the scroller is moved, without adding an additional button for initializing the AudioContext, you can create or resume the AudioContext as part of the scroller's event listener. However, due to modern web browsers' autoplay policies, audio might not play until there's some form of user interaction with the page. The scroll is an interaction with the page. Here's a revised approach that attempts to initialize the AudioContext on the first user interaction, i.e. the first scroll event, and then plays the sound accordingly. Play sound on scroll event: the sound should play when the scroller is moved. Yes, that's right. User interaction: modern browsers may still restrict audio playback until there has been explicit interaction with the page; scrolling is generally considered a passive interaction that might not be enough to enable audio playback in all browsers, and a more reliable approach is to have some form of active interaction, like a button click or a toggle switch. Page load experience: automatically playing audio on page interaction might be unexpected for users; it's often good practice to inform users that audio will play and provide them with control over it. Testing across browsers: make sure to test this implementation across different browsers to ensure consistent behaviour, as autoplay policies can vary. This approach tries to circumvent the limitation by initializing and resuming the AudioContext during the first scroll event, but keep in mind that browser restrictions on autoplay might still apply. At this stage, I'm happy to take the risk of annoying the user. We are having trouble with this one. Anyway, so we have audioContextInitialized set to false, initializeAudioContext, then playFrequency. Now, the problem is that I can't get rid of that. playFrequency? Is this the whole function? Yeah, let's use that.
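Roughly, the pattern it keeps generating looks like this; a hedged sketch with assumed element IDs and function names, combining the webkitAudioContext fallback with lazy initialisation inside the scroll listener.

```javascript
// Hedged sketch: lazily create / resume the AudioContext from the scroller's own
// event listener, with the webkitAudioContext fallback for older Safari.
// The element ID and function names are assumptions.
let audioContextInitialized = false;
let audioCtx;

function initializeAudioContext() {
  if (audioContextInitialized) return;
  const Ctx = window.AudioContext || window.webkitAudioContext;
  audioCtx = new Ctx();
  audioContextInitialized = true;
}

document.getElementById("file-scroller").addEventListener("input", async () => {
  initializeAudioContext();
  if (audioCtx.state === "suspended") {
    await audioCtx.resume(); // may still be blocked if the browser treats scrolling as passive
  }
  fetchDataAndRender(); // re-plot the window and play the mapped notes
});
```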
That should initialize the audio context in the scroll event and then fetch and render. Yeah, we don't need all the rest; it keeps adding more functions, it's ridiculous, we already have that. And it repeats the same notes as before about user interaction, page-load experience, and testing across browsers. So it's the autoplay policies preventing us from playing in the browser. Every time I ask for something, it generates a new function, which is not great. We'll try fixing this next time. See you in a bit.