Too soft, isn't it? Yeah, this is better. Second, just adjust the... Yeah, we have the music thing going on as well. We keep talking just so I can test this sound. Do we have the music coming through? Let's see... Right, that should be coming through. The sound of music and the bot talking. Right. Ah, it's the OBS settings. Someone suggested that I should stream on another platform, Twitch or whatever. I never used it before, but I actually did try to stream on it yesterday. The bot is still talking; I have to turn it off. So I need the link for the stream. Should we go on X as well? Yeah, it's too many platforms to manage. And we may need to leave and have this post to work, just checking. So I muted this tab to show you my OBS for a second. We haven't started yet? No, it is muted, okay. Because I have the messages coming through to OBS as well.

Just to kick things off, let's start with the gesture tool, what's called Gesture Groove. It's the one you see at the bottom of the screen as well. It needs some improvements. We're getting all sorts of... well, not errors, but warnings, and a lot of them, actually. Is it all on the same page? Yeah, there's a lot of them. They must all be different, because otherwise it would just be incrementing this number here. I just dropped them into GPT, but then we have other things we wanted to do. This one is only HTML and JavaScript; there is no backend. And actually it might be a good opportunity to say that, from now on, we will try to have all the tools on barnicales.com be front-end only. Because it's a free site, you will be paying the electricity bill for using the tools. Then there is another site that I developed, bionicloud.com, that will have the flash applications, and that will require some sort of login. Let me know if you would like to try it out. It will essentially be a custom version of the tools that you have on barnicales.com, but they will have a backend, and the whole backend is locally run, so you will be using my server, my cloud. That's why it's called bionicloud. That's that. Oops, the error is in. Yeah, we don't want to overload GPT-4 because it will give us a timeout. I wish the OpenAI guys and girls had enough capacity; whether they are working on it or not, who knows.

It seems you already have a mousedown event listener for this purpose. Make sure this event listener is functioning correctly. Sometimes, placing it inside a handler like DOMContentLoaded can delay its registration; it should be readily available as soon as the page is interactive. Check for a suspended state: before playing any audio, check if the audio context state is suspended. If it is, show a user prompt to interact with the page to enable audio. Log to ensure your event listener is firing and Tone.start() is being called as expected. Fallback or instructions: provide clear instructions or feedback to the user if the audio context has not started. A simple button with a label like "Enable audio" can be an effective way to prompt user interaction. Testing across browsers: test this behavior across different browsers and devices, as they might handle audio playback policies slightly differently. Implement these changes and test again; this should resolve the issue with the audio context not starting.

Okay, but we are live streaming on YouTube at the moment. There is no one in the stream; hopefully someone joins in. But as we develop this tool live, I shared the code that we have so far. So yeah, we would like to fix any errors.
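Since the advice above boils down to resuming a suspended AudioContext on a user gesture, here is a minimal sketch of that pattern, assuming Tone.js is already loaded on the page; the button id is made up for illustration and is not from the stream's actual code.

```javascript
// Minimal sketch of the "Enable audio" pattern described above.
const enableAudioButton = document.getElementById('enable-audio'); // hypothetical button

enableAudioButton.addEventListener('click', async () => {
  // Browsers keep the AudioContext suspended until a user gesture;
  // Tone.start() resumes it and returns a promise.
  if (Tone.context.state === 'suspended') {
    await Tone.start();
  }
  console.log('Audio context state:', Tone.context.state); // should log "running"
});
```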
But the main thing is that we would like to improve the functionality of this application. We would like to add a feature that checks how many fingers are shown on the screen using the camera and, based on the number of fingers, plays multiple notes simultaneously. So essentially, potentially have a different range of notes for each finger, or something like that. Ideally, the idea is that we're playing chords, and ideally the chords will be legit chords, so the notes will be matched well together. Yep. Yep. Yep.

Got it. You're live streaming the development process and want to enhance the application to recognize the number of fingers shown and play chords accordingly. Here's a breakdown of how to implement this. Finger counting: MediaPipe Hands can detect and track hand landmarks, including fingers. Use this to count the number of fingers extended. Each finger has specific landmarks; by analyzing the distances between them, you can determine whether a finger is extended. Chord mapping: define a set of chords or notes that correspond to the number of fingers extended. For instance, if one finger is extended, play a specific note, and if two fingers are extended, play a chord that combines two notes. Playing chords: with Tone.js, you can play multiple notes at once to form chords. Use Tone.PolySynth for polyphonic synthesis, which allows multiple notes to be played simultaneously. Dynamic chord generation: you might want a system that dynamically generates chords based on the number of fingers. This could be a predefined set of chords or an algorithm that constructs chords in real time. Visual feedback: provide visual feedback on the screen showing the number of fingers detected and the corresponding chord being played. This enhances the user experience, especially for viewers of your live stream. Optimization and testing: test this feature thoroughly, as real-time finger detection and audio generation can be resource intensive; optimize for performance. User interaction: since you're live streaming, engage your audience by explaining what you're doing and the challenges, and encourage feedback or suggestions. Here's a simple pseudo-code snippet to illustrate the concept (a sketch follows below). This enhancement should make your application more interactive and engaging, especially for live-stream viewers. Good luck with your development and live stream.

We do like your suggestions. On finger counting: if MediaPipe can detect and track the landmarks, including fingers, and use them to count the number of extended fingers, that will be good. That is, we'll play a different number of notes depending on how many extended fingers there are. Now, regarding the chord mapping: can we make sure that, depending on how many fingers are extended, the same number of notes is played? So if two fingers are extended, we'll play two notes; if three fingers, three notes. The mapping should correspond to notes that go well together, that are matched. Yes, we need to make sure we can play multiple notes at once; that will help. Does it mean we need to change the JavaScript from using Tone.js at the moment? And yes, we would like to provide visual feedback on screen. Well, actually we would like to keep the screen as clean as possible; we don't want any rubbish on the screen. We would like it to be very intuitive for the user. But yes, eventually we'll also need to write a set of instructions to go at the bottom of the HTML page. Yep.
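As a stand-in for the pseudo-code snippet mentioned above, here is a hedged sketch of a fixed finger-count-to-chord mapping with Tone.PolySynth; the chord choices and function names are illustrative assumptions, not the stream's actual code, and the constructor form shown is the Tone.js v14 style.

```javascript
// Sketch: one chord per finger count, played through a polyphonic synth.
const polySynth = new Tone.PolySynth(Tone.Synth).toDestination(); // older Tone.js uses .toMaster()

// One entry per number of extended fingers (1 to 5), drawn from a C major context.
const chordMap = {
  1: ['C4'],
  2: ['C4', 'E4'],
  3: ['C4', 'E4', 'G4'],             // C major triad
  4: ['C4', 'E4', 'G4', 'B4'],       // Cmaj7
  5: ['C4', 'E4', 'G4', 'B4', 'D5'], // Cmaj9
};

function playChord(fingerCount) {
  const notes = chordMap[fingerCount];
  if (notes) {
    polySynth.triggerAttackRelease(notes, '8n'); // all notes sound together for an eighth note
  }
}
```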
Great, focusing on a clean UI and intuitive interaction is key for user engagement, especially during a live stream. Let's refine the approach. Finger counting with MediaPipe: use MediaPipe to track hand landmarks and count extended fingers. This provides the input for your musical mapping. Chord mapping based on finger count: create a mapping where the number of extended fingers corresponds to the number of notes played. Ensure the notes harmonize well together; you could use major or minor scales, or even predefined chord progressions (a scale-based sketch follows this exchange). Playing multiple notes with Tone.js: use Tone.PolySynth for playing multiple notes simultaneously. This will allow you to play chords based on the number of extended fingers. Minimalist UI: since you prefer a clean screen, limit on-screen elements. Only display essential information, like a simple indicator of the number of fingers detected or the chord being played, if at all. Instructions: add a concise set of instructions at the bottom of your HTML page. This guides users without cluttering the interface. Code adaptation: modify your JavaScript to integrate these features. Here's a basic outline. Performance and testing: ensure the application performs well, particularly for real-time processing during live streaming. This approach should enhance the functionality of your application while keeping the UI clean and user-friendly. Remember, live streaming your development process also offers a unique opportunity for real-time feedback and engagement with your audience.

So, are we going to be using GitHub Copilot, or are you going to be generating the code without any placeholders? I'll be providing the code directly, without using GitHub Copilot. Here's a more detailed version of the JavaScript code for your application, integrating the functionality of playing chords based on the number of extended fingers detected by MediaPipe. In this code, you need to fill in the logic to count extended fingers using the MediaPipe landmarks. playChord: this function maps the number of extended fingers to specific chords. The chord map can be modified to match your desired musical output. Integration with MediaPipe Hands: make sure to configure MediaPipe Hands properly and call the countExtendedFingers and playChord functions with the detected hand landmarks. Remember, the finger-counting logic is crucial and requires careful implementation to accurately detect extended fingers. This code provides a framework, but you'll need to tailor it to your specific needs, especially the finger-counting part.

And how do I integrate it with the original code that you provided at the beginning of this chat? To integrate the new chord-playing functionality with your original code, you'll need to combine the logic for gesture detection and audio generation. Here's how you can do it. Integrate the finger-counting logic: include the countExtendedFingers function and its integration with MediaPipe Hands in your existing code. Combine with chord playing: use the playChord function within the MediaPipe Hands results processing. Merge existing functionality: ensure that existing features, like individual note playing based on hand position, still work alongside the new chord functionality. Here's a simplified example of how you might combine these elements. In this combined setup, for finger counting you need to implement the logic in countExtendedFingers based on your requirements and the specifics of the MediaPipe landmarks.
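On the "use major or minor scales" suggestion, here is one hedged way to build diatonic chords on the fly rather than with a fixed map: stack alternating notes of a scale, taking as many notes as there are extended fingers. The scale choice and helper name are illustrative assumptions, not anything from the stream's code.

```javascript
// Sketch: build a chord by stacking every other degree of a scale,
// with the chord size equal to the number of extended fingers (1 to 5).
const cMajorScale = ['C4', 'D4', 'E4', 'F4', 'G4', 'A4', 'B4', 'C5', 'D5', 'E5'];

function chordFromScale(fingerCount, rootIndex = 0) {
  const notes = [];
  for (let i = 0; i < fingerCount; i++) {
    // Every second scale degree (root, third, fifth, seventh, ninth) stays in the key,
    // so the stacked notes are guaranteed to match one another.
    notes.push(cMajorScale[(rootIndex + i * 2) % cMajorScale.length]);
  }
  return notes;
}

// Example: three extended fingers -> ['C4', 'E4', 'G4'], a C major triad.
console.log(chordFromScale(3));
```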
Existing functionality: ensure that the integration of the new code does not disrupt your existing gesture-to-note mappings and other functionality. Testing and optimization: after integrating, test the application thoroughly to ensure that both the... Sure, to make the integration smoother, let's break it down into steps and focus on the key points where the new code intersects with your existing code. The primary goal is to blend the finger-counting and chord-playing logic with your current gesture recognition and sound generation. First, include the countExtendedFingers function. This function will analyze the hand landmarks provided by MediaPipe and return the count of extended fingers; you need to implement the logic for this based on the landmarks. Add the playChord function, which maps the count of extended fingers to specific chords. This function uses PolySynth from Tone.js to play those chords. In your existing MediaPipe Hands results callback, integrate the calls to countExtendedFingers and playChord. This should be done in a way that complements your existing logic for gesture-based sound generation. Here's a simplified example showing how these parts could be integrated into your existing MediaPipe callback (see the sketch after this exchange). After integrating the new features, test the application to ensure that the finger counting works accurately, chords are played as expected based on finger count, and your existing gesture-to-note functionality is not disrupted. Debugging: use console.log statements to debug and understand the flow of data. Performance: pay attention to performance, as real-time video processing and sound generation can be resource intensive. User feedback: since you prefer a minimal UI, consider subtle ways to provide feedback or indicate when chords are played. Feel free to share specific parts of your existing code if you need more tailored guidance on the integration. Remember, iterative testing and refinement are key to developing a robust application.

To integrate the new chord-playing feature with your existing code, you should focus on merging the onHandResults function, which includes the new logic, with your current onResults function used for processing the MediaPipe Hands results. Here's how you can do it. Combine hand results handling: integrate the logic of counting extended fingers and playing chords into the onResults function, which is already processing hand landmarks. Ensure compatibility: make sure the new code works seamlessly with your existing logic for gesture recognition and note playing. Here's an example of how to merge the new functionality with your existing onResults function. In this integrated setup, the countExtendedFingers function will analyze the hand landmarks to count how many fingers are extended, the playChord function will then play a chord based on the number of extended fingers, and the rest of your existing logic in onResults for individual hand gestures and note playing remains intact. Make sure to test this integrated code thoroughly to ensure both the existing functionality and the new chord-playing feature work correctly without conflicts. Remember, achieving seamless integration might require some adjustments and fine-tuning.

We're not doing pose, just face and hands for now. You could have some sort of dancing app, and also look at the whole body to generate music, but we won't be doing that at the moment. Yeah, GitHub Copilot needs a selection of the code, so essentially it's sending it to GPT.
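For the "simplified example" referenced above, here is a hedged sketch of how countExtendedFingers and playChord might be called from the MediaPipe Hands results callback. It assumes the helper functions from the earlier sketches and does not reproduce the stream's actual per-note gesture logic.

```javascript
// Sketch: hook chord playing into the MediaPipe Hands results callback.
function onResults(results) {
  if (!results.multiHandLandmarks || results.multiHandLandmarks.length === 0) {
    return; // no hands detected in this frame
  }

  for (const landmarks of results.multiHandLandmarks) {
    const fingerCount = countExtendedFingers(landmarks); // from the finger-counting sketch
    if (fingerCount > 0) {
      playChord(fingerCount); // from the chord-mapping sketch
    }
    // ...the existing per-hand, per-note gesture logic would continue here...
  }
  // Note: in practice you would throttle this (for example, only retrigger when the
  // finger count changes) so chords are not re-fired on every video frame.
}

// Registered once on the Hands instance, e.g. hands.onResults(onResults);
```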
I won't be able to integrate it myself. Can you start regenerating the whole code with the necessary changes that we made in this chat? Certainly. I'll integrate the new chord-playing functionality into your existing code, creating a unified script. This integrated script will count extended fingers using MediaPipe Hands and play corresponding chords with Tone.js, while maintaining your original gesture recognition and sound generation features. Here's the complete integrated code. Finger counting: the countExtendedFingers function needs to be implemented to count the extended fingers accurately. Chord playing: the playChord function plays chords based on the number of extended fingers. Existing features: your existing gesture recognition and sound generation logic is preserved and works alongside the new chord functionality. After integrating this code, you'll need to test and refine it, especially the finger-counting logic. This script provides a framework; the specific implementation details, particularly in countExtendedFingers, need to be tailored to your application's requirements.

Right, so this part at the beginning should be included in the HTML, not in the JavaScript. Place it there. Clear, clear, clear. So it's different from this point. That's the whole thing, clear. Right, because we're already declaring it. Why are we importing this again here as well? Continue integrating the whole code. Yep, yep. Sure, let's continue integrating the whole code. I'll include the parts of your original code that handle MediaPipe hand, face, and pose tracking, as well as the Tone.js synthesizer setup. Here's the complete integrated code. Yeah, why are we creating a FaceMesh instance? Why do we have the CDN file? Isn't that already being done in the HTML? (A setup sketch follows below.) Yep, yep, yep. It's still generating, that's why. This integrated script combines your original functionality with the new chord-playing feature. Make sure to fill in the logic for countExtendedFingers to count the extended fingers accurately. Also, test the application thoroughly after integration to ensure all features work seamlessly together.

Continue integrating. Sure, let's continue integrating the code. I'll focus on combining the gesture-based sound generation logic with the new chord-playing functionality. The goal is to ensure both features work harmoniously. I'll retain the parts where you process hand movements to generate musical notes and merge them with the chord-playing logic. Here's how the integration looks. In this integrated script, the onResults function now includes both the chord-playing feature and your existing logic for playing individual notes based on hand gestures. The script determines which hand is being used and plays either a chord or individual notes accordingly. The logic for handling hand movement, calculating the note to play, and triggering the synthesizers is preserved. Ensure to test this thoroughly, as the integration introduces complexities in handling simultaneous input from hand gestures and chord playing. You might need to fine-tune the conditions under which chords versus individual notes are played based on user gestures.

Yes, that's the whole onResults function; previousPosition is not being used. Okay, let's see where GitHub Copilot is taking this from... Actually, that's the only suggestion. We already have it. Yes, we know where it's taking it from. Not quite the same, but it's taking it from the code below, so that's a good way of doing it. Let's have a look in there.
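On the question of the FaceMesh instance and the CDN files: the MediaPipe JavaScript solutions are normally loaded via script tags in the HTML, while the JavaScript only constructs the instances and points locateFile at the same CDN. Here is a hedged sketch of that setup; the jsdelivr paths are the commonly used ones and the option values are illustrative, so verify them against the page's actual includes.

```javascript
// Assumes hands.js, face_mesh.js and camera_utils.js are already loaded via <script> tags,
// so the Hands, FaceMesh and Camera constructors are available as globals.
const videoElement = document.querySelector('video');

const hands = new Hands({
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${file}`,
});
hands.setOptions({ maxNumHands: 2, minDetectionConfidence: 0.5, minTrackingConfidence: 0.5 });
hands.onResults(onResults); // the hand-processing callback

const faceMesh = new FaceMesh({
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`,
});
faceMesh.setOptions({ maxNumFaces: 1, minDetectionConfidence: 0.5, minTrackingConfidence: 0.5 });
faceMesh.onResults(onFaceResults); // the face-processing callback

// One camera loop feeds both solutions each frame.
const camera = new Camera(videoElement, {
  onFrame: async () => {
    await hands.send({ image: videoElement });
    await faceMesh.send({ image: videoElement });
  },
  width: 640,
  height: 480,
});
camera.start();
```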
Okay, so we... It's good. Face results, green. The onResults function should be good to go. Also, remove the placeholder GitHub Copilot left, just by looking at the code at the bottom of the same file; it picks it up now. Synth... length. Okay, that's routed to the destination. We have... synth... right. And now it's taking code that already exists; I don't know why. Integration. Right, but now we have a different scale for left and right.

Can you continue integrating while I'm testing what we currently have? Yep. Certainly. To ensure a comprehensive integration of the chord-playing functionality with your existing gesture-based sound generation, thorough testing and debugging are necessary. I'll continue integrating the code and outline key areas for testing. Here's the continuation of the integration, focusing on the onResults function where both the chord and individual-note functionalities converge. By methodically testing these aspects, you can identify and resolve any issues, ensuring that the integration of the new chord-playing feature with your existing functionality is seamless and effective. Remember, iterative development and testing are key to refining complex interactive applications like this.

How do we select the second camera? I have two USB cameras connected. We're already using the first; I would like to use the second one. Yep. To select the second USB camera in a web application, you will need to use the mediaDevices.getUserMedia API, which is part of the WebRTC specification. This API allows you to access media input devices like cameras and microphones. Here's how you can modify your existing code to choose the second camera. Enumerate devices: first, use navigator.mediaDevices.enumerateDevices() to get a list of all available video input devices (cameras). Select the second camera: once you have the list, you can select the second camera, if available, by device ID. Update the media stream: use the selected camera's device ID to get a media stream and set it as the source for your video element. Here's an example of how you might modify your existing code to achieve this (see the sketch below). The device ID of the second camera, if available, is used to request a media stream. Remember to call selectSecondCamera at an appropriate place in your code, such as after the DOM content is loaded or in response to a user action. This function will attempt to switch the video feed to the second camera. Make sure you handle cases where the second camera might not be available or accessible.

Yeah, I loaded the second camera from the HTML page itself, so from the browser; that's working okay. I would also like to change the code to be able to show the coordinates only, without the actual camera footage. If you want to display the coordinates detected by MediaPipe, like hand landmarks, without showing the actual camera footage, you can modify your setup to draw the coordinates on a canvas while keeping the video element hidden. Here's how you can adjust your code. Hide the video element: you can hide the video element using CSS, either by setting display: none or by positioning it off-screen. Draw coordinates on the canvas: use the canvas to draw the coordinates, or any other visual representation, based on the data received from MediaPipe. Here's an example modification to your existing code. In this setup, the video element is hidden, so it won't be displayed on the page. The onResults function is modified to draw on the canvas based on the results from MediaPipe.
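For the second-camera question, here is a hedged sketch of the enumerate-then-select pattern described above; selectSecondCamera is an illustrative name and error handling is kept minimal.

```javascript
// Sketch: switch the page's <video> element to the second connected camera.
async function selectSecondCamera(videoElement) {
  // List every media device the browser knows about.
  // Note: deviceIds and labels may be blank until camera permission has been granted once.
  const devices = await navigator.mediaDevices.enumerateDevices();
  const cameras = devices.filter((d) => d.kind === 'videoinput');

  if (cameras.length < 2) {
    console.warn('Only one camera found; keeping the current one.');
    return;
  }

  // Request a stream from the second camera by its deviceId.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: cameras[1].deviceId } },
    audio: false,
  });

  videoElement.srcObject = stream;
  await videoElement.play();
}

// Usage (illustrative): selectSecondCamera(document.querySelector('video'));
```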
It draws circles at each landmark and optionally displays their coordinates. You can further customize the canvas drawing based on your requirements, e.g. different colors or shapes for different landmarks. This approach allows you to use the full capabilities of MediaPipe's tracking while customizing the visual output to your preference.

It's just taking code from the bottom and starting to bring in stuff from the beginning. Let's try GitHub Copilot again. Yeah, Copilot should be able to solve things like that. Have we got this function generated anywhere already? Yeah, GitHub Copilot is just refusing to do anything.

To implement the countExtendedFingers function, you'll need to analyze the hand landmarks provided by MediaPipe to determine which fingers are extended. This involves some understanding of the hand's anatomy as represented by the landmarks. Each finger has specific landmarks, and by analyzing the angles or distances between these landmarks, you can infer whether a finger is extended or not. Here's a basic example of how you might implement this function (see the sketch below). In this implementation, for the thumb, a simple check based on the horizontal position of landmarks is used; this example assumes a left hand, and for the right hand the condition would be reversed. For the other fingers, it checks whether the y-coordinate of the fingertip is above the corresponding intermediate phalanx landmark; if it is, the finger is likely extended. The function returns the count of extended fingers. Note: this implementation is quite basic and may not be accurate in all scenarios. The actual determination of whether a finger is extended can be complex and may require more sophisticated checks, possibly considering the angles between landmarks. You might need to adjust the logic based on the hand orientation and other factors specific to your application's requirements.

I already had, like, chord... I have this second camera, yes. A bunch of errors. What's that flashing? Ctrl+I... Place all this in the code. It's good that it's actually working now. Which function is actually displaying the coordinates on the screen, overlaying them onto the video footage? Yep. In your existing setup, displaying the coordinates overlaid on the video footage is typically handled in the function that processes the results from MediaPipe's hand, face, or pose tracking. This function is often named something like onResults, where you draw on a canvas that overlays the video footage. Here's an example of how you might modify this function to display coordinates. In this modified onResults function, the video frame is drawn onto the canvas, each hand landmark is drawn as a blue circle, and the coordinates of each landmark are displayed next to the landmark. This setup will overlay the coordinates on the video footage, providing a visual representation of the hand landmarks and their positions. Remember to adjust the styling, like colors, text size, etc., to suit your UI design preferences.

We're using the second camera. It's not working well; it's not displaying the landmarks. Do you know what's up? This is odd. Canvas: we have the canvas. We have the image. Okay, we don't have that code. Cameras begin buff. Okay, this is odd. This function, why do we have onFaceResults? This function should be inside onResults. Do we still have scale? Yes, we do. Yes, something's wrong. Clear. Clear. Clear, clear, clear, clear. Clear, clear, clear, clear.
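Here is a hedged sketch of the basic countExtendedFingers approach described above, using the standard MediaPipe hand-landmark indices (fingertips 8, 12, 16, 20 against PIP joints 6, 10, 14, 18; thumb tip 4 against IP joint 3). As the exchange notes, this naive check assumes an upright hand and one orientation, and will misfire for rotated hands.

```javascript
// Sketch: count extended fingers from one hand's 21 MediaPipe landmarks.
// Landmark coordinates are normalized, with y increasing towards the bottom of the frame.
function countExtendedFingers(landmarks) {
  let count = 0;

  // Thumb: compare the tip (4) with the IP joint (3) horizontally.
  // This crude check assumes one hand orientation; mirror it for the other hand.
  if (landmarks[4].x < landmarks[3].x) {
    count++;
  }

  // Index, middle, ring, pinky: fingertip vs. its PIP joint.
  const fingerTips = [8, 12, 16, 20];
  const pipJoints = [6, 10, 14, 18];
  for (let i = 0; i < fingerTips.length; i++) {
    // A fingertip above (smaller y than) its PIP joint suggests the finger is extended.
    if (landmarks[fingerTips[i]].y < landmarks[pipJoints[i]].y) {
      count++;
    }
  }

  return count; // 0 to 5 extended fingers
}
```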
Your code integrates MediaPipe for gesture recognition and Tone.js for sound generation, and it's set up well. The onResults function processes the hand landmarks, and you've correctly added the countExtendedFingers function for detecting the number of extended fingers. To ensure the code functions as expected, consider the following. Testing countExtendedFingers: ensure this function accurately counts the number of extended fingers; its logic might need refinement. Integrating the playChord functionality: the playChord function is triggered within the onResults function, which should work as intended; however, test to ensure the chord is correct based on the number of extended fingers. Gesture-based sound generation logic: check the logic within onResults for generating sound based on gestures, and ensure it doesn't conflict with the chord-playing functionality. Visual feedback on canvas: if you want to show the coordinates of hand landmarks overlaid on the video footage, make sure the canvas drawing logic within onResults works as expected. Performance: given that your application is processing video...

Can we... There are a couple of things. First of all, the sound is way too loud. Second, I don't see the face and hand coordinates overlaid on the screen. Third, we would like an option to remove the original footage and just display the coordinates alone. To address the issues you've mentioned, let's go through each one and make the necessary adjustments to your code. To control the volume of the sound generated by Tone.js, you can set the volume property for each synthesizer; here's how you can do it. Ensure you have logic in your onResults functions, for both hands and face, that draws the coordinates on the canvas; here's an example for hand coordinates. To have an option to display only the coordinates without the video feed, you can control the drawing of the video frame on the canvas; here's how you can add a toggle for this (a combined sketch appears after the marker discussion below). In this setup, the showVideoFeed flag controls whether the video frame is drawn on the canvas. The toggleVideoFeed function switches this flag, and can be connected to a button or another UI element for user control. After making these changes, test your application thoroughly. Check the volume level and adjust it as needed to ensure it's appropriate. Verify that the coordinates are being drawn on the canvas as expected. Test the toggle feature to switch between showing and hiding the video feed. These adjustments should help address the issues you're experiencing and improve the functionality of your application.

Ensure you have logic in your onResults functions, for both hands and face, that draws the coordinates on the canvas; here's an example for hand coordinates. That's the onResults function. A couple of things: we would like to display the face coordinates as well, we would like the coordinates to be there all the time, and we don't need the text, just the markers overlaid where the fingers are. To display face coordinates along with hand coordinates, and change the visual representation to simple markers without text, you can adjust the onResults function. This function should draw markers for both hand and face landmarks detected by MediaPipe. Here's how you can modify the function.
Hand landmarks: the drawLandmarks function is called for each set of hand landmarks, drawing small blue circles at each landmark position. Face landmarks: similarly, the same function is called for each set of face landmarks, drawing small green circles. drawLandmarks function: this helper takes the landmarks and a color as parameters and draws a circle at each landmark position. This setup will display both hand and face landmarks as colored markers on the canvas, overlaying the video feed. The markers will move in real time, tracking the detected landmarks without any additional text or overlay. After implementing these changes, test your application to ensure that hand and face landmarks are correctly represented on the canvas, that the markers accurately follow the movements of hands and face in real time, and that there is no unnecessary text or other visual clutter, only the colored markers. This approach should provide a clear and minimalistic visual representation of the tracked landmarks, enhancing the user experience.

There are no face landmarks and there is no sound playing. Can you update the whole function? Yep. To ensure that both face landmarks are displayed and sound is played correctly, let's go through the entire onResults function and make sure it's set up properly. Here's a revised version of the function, addressing both displaying face landmarks and playing sound. Face landmarks: the function now includes logic to draw face landmarks using drawLandmarks. Ensure that onResults is correctly set up as the callback on your MediaPipe FaceMesh instance. Sound playback: the playChord function is called within the hand-landmarks section, which should play a chord based on the number of extended fingers. Make sure the chord map in playChord is correctly defined with your desired chords. Finger counting: ensure the countExtendedFingers function has the correct logic to count the extended fingers. Verify that the face landmarks are displayed as green dots on the canvas. Test the hand landmarks and ensure chords play as expected when fingers are extended. Ensure the countExtendedFingers and playChord functions are correctly implemented and integrated. This comprehensive approach should address the issues of displaying face landmarks and sound playback. Remember to thoroughly test each part of the functionality to ensure everything works as expected.

Oops, sorry for that. The volume, the volume... bring it up, let Copilot have it for a second, see what it says. So it picked up some stuff from the commented-out code. Yeah, onFaceResults is not being used. Is this function currently being used? Based on the code you provided, the onFaceResults function is defined, but it's not clear if it's actively being used or connected to the MediaPipe FaceMesh instance. To use this function, you need to ensure that it's set as the callback for the face mesh instance's results. Here's how you can connect the onFaceResults function to your face mesh instance. In this setup, the onFaceResults function will be called each time the face mesh instance processes a frame and has results to report. This function should handle the face mesh results, such as drawing face landmarks on the canvas. Make sure onFaceResults is properly defined in your code and contains the logic to process and display face landmarks, and that the face mesh instance is correctly configured and connected to a video source.
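Putting this exchange together, here is a hedged sketch of the canvas drawing with a video-feed toggle: hand landmarks as blue dots, face landmarks as green dots, both drawn by one helper onto the same canvas, with the showVideoFeed flag from earlier deciding whether the camera frame is painted underneath. Element ids and variable names are illustrative assumptions.

```javascript
// Illustrative canvas setup; the element id is an assumption.
const canvasElement = document.getElementById('output-canvas');
const canvasCtx = canvasElement.getContext('2d');

let showVideoFeed = true; // toggled by the user, e.g. via a button

function toggleVideoFeed() {
  showVideoFeed = !showVideoFeed;
}

// Helper: draw one small filled circle per landmark in the given color.
function drawLandmarkDots(landmarks, color) {
  canvasCtx.fillStyle = color;
  for (const landmark of landmarks) {
    canvasCtx.beginPath();
    // Landmark coordinates are normalized (0 to 1), so scale to the canvas size.
    canvasCtx.arc(landmark.x * canvasElement.width, landmark.y * canvasElement.height, 3, 0, 2 * Math.PI);
    canvasCtx.fill();
  }
}

// Hands callback: optionally paint the camera frame, then blue hand markers.
function onResults(results) {
  canvasCtx.clearRect(0, 0, canvasElement.width, canvasElement.height);
  if (showVideoFeed && results.image) {
    canvasCtx.drawImage(results.image, 0, 0, canvasElement.width, canvasElement.height);
  }
  if (results.multiHandLandmarks) {
    for (const handLandmarks of results.multiHandLandmarks) {
      drawLandmarkDots(handLandmarks, 'blue');
    }
  }
}

// FaceMesh callback: green face markers added onto the same canvas.
function onFaceResults(results) {
  if (results.multiFaceLandmarks) {
    for (const faceLandmarks of results.multiFaceLandmarks) {
      drawLandmarkDots(faceLandmarks, 'green');
    }
  }
}
```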
The onResults function for the hands instance and the onFaceResults function for the face mesh instance can work together without conflicts, especially if they draw on the same canvas. If your application also involves processing pose data, a similar approach should be taken to connect the results of the pose instance to an appropriate function for processing and visualization.

Right, so we have the face. Okay, also, how do we make sure the volume is not too high? At the moment we're setting the volume once somewhere, but then we have the Tone synth in more than one place. To ensure consistent volume control across all instances of Tone.Synth, you can set the volume at the point of creation for each synthesizer. In Tone.js, the volume can be set in the synthesizer's constructor options. This way, you can maintain a uniform volume level for all your synthesizer instances. Let's modify your synthesizer instances to include a volume setting. In this setup, the standard volume is set to minus 12 decibels, but you can adjust this value to suit your needs; a lower value means a lower volume. This applies to each synthesizer: synthLeft, synthRight, and the PolySynth. Dynamic volume adjustment: if you need to adjust the volume dynamically, e.g. in response to user input, you can modify the volume property of each synthesizer instance in a function that takes the new volume: synthLeft.volume.value = newVolume; synthRight.volume.value = newVolume; polySynth.volume.value = newVolume. Initial volume: ensure that the volume is set before playing any sound, to prevent unexpectedly loud output. Testing: after implementing these changes, thoroughly test the volume levels to ensure they're appropriate across all usage scenarios in your application.

The volume is just called volume; it's not a global. I have it showing the camera first and then... Okay, we have the previous thing working as well, don't we? Just, can we mute this page? Okay, maybe. Lots of trouble because I have this page open twice. Okay, that's back. That's the new one. You can actually mute it. Yes, sorry. This volume is not controlled by... the volume is not controlled by the... Okay, it's working now, which is great.

There are a few things missing from the original code. For example, the loudness used to be controlled by the Y-axis position of the fingers on the screen. It's not doing that anymore, it seems. Or maybe it does, I don't know. Can we check? Yep. To reintroduce the feature where the loudness (volume) of the notes is controlled by the Y-axis position of the fingers, you need to modify the part of your onResults function where you handle gesture-based sound generation. Specifically, you'll use the Y-coordinate of the hand landmarks to dynamically adjust the volume of the synthesizers. Let's revise the onResults function to include this functionality (see the sketch below). In this modification, the Y-coordinate of the hand landmarks is used to calculate the volume. The volume is set inversely proportional to the Y-coordinate, so a higher screen position results in a lower volume. The volume value is clamped between 0 and 1 for safety. The calculated volume is then converted to decibels and applied to the synthesizer corresponding to the hand being processed. Test different hand positions: move your hands up and down in the camera's field of view and observe the change in volume. The volume should increase as your hand moves lower on the screen and decrease as it moves higher.
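Here is a hedged sketch of that Y-axis volume mapping inside the hand-processing code, using Tone.gainToDb to convert a 0 to 1 gain into decibels. The synth names follow the ones mentioned in this exchange (synthLeft, synthRight), but the exact wiring into onResults is an assumption.

```javascript
// Sketch: map hand height to loudness. MediaPipe y-coordinates are normalized
// (0 at the top of the frame, 1 at the bottom), so a lower hand gives a larger y.
function applyHandVolume(handLandmarks, synth) {
  const wristY = handLandmarks[0].y;             // landmark 0 is the wrist
  const gain = Math.min(Math.max(wristY, 0), 1); // clamp to the 0 to 1 range for safety

  // Lower on screen -> louder, higher on screen -> quieter.
  // Tone.gainToDb converts a linear gain into decibels for the volume parameter.
  synth.volume.value = Tone.gainToDb(gain);
}

// Inside the hand-processing branch of onResults (illustrative):
// applyHandVolume(landmarks, handedness === 'Left' ? synthLeft : synthRight);
```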
Ensure smooth volume transitions: the volume changes should be smooth, without abrupt jumps. Check for conflicts: make sure this dynamic volume control doesn't conflict with other functionality, especially if multiple features depend on the hand landmarks. By reintegrating this feature, you'll regain the dynamic and interactive aspect of your application, where the user's hand position directly influences the sound output.

Not too early, when we need this. A separate window, do we? I will be finishing up soon. Anyone coming in? Who knows. We also have the following error in the console in the browser. Can we check how many notes we have? What's the maximum number of fingers that can be played at once? One, two, one, two.

The error message "Max polyphony exceeded. Note dropped" indicates that the number of notes you're trying to play simultaneously exceeds the maximum polyphony limit of your Tone.PolySynth instance. Polyphony refers to the number of notes a synthesizer can play at once; when this limit is exceeded, additional notes are dropped, i.e. not played. To resolve this, you need to set an appropriate polyphony limit when initializing your Tone.PolySynth; this limit should match the maximum number of notes you expect to play simultaneously. Setting the polyphony limit: if you want to match the polyphony limit to the maximum number of fingers detected, first decide what that maximum is. For example, if you're only tracking one hand, the maximum would be 5, for five fingers; if you're tracking both hands, it would be 10. Here's how you can set up Tone.PolySynth with a specific polyphony limit: const maxPolyphony = 10;
set this based on the maximum number of notes you expect to play simultaneously, then const polySynth = new Tone.PolySynth(maxPolyphony, Tone.Synth).toDestination(). In this example, maxPolyphony is set to 10, allowing up to 10 notes to be played simultaneously; adjust this number based on your application's requirements. Testing and adjustments: test with the maximum finger count by playing chords or notes with the maximum number of fingers extended to ensure no notes are dropped. Monitor performance: more polyphony means more processing power, so ensure your application's performance remains smooth with the increased polyphony limit. Adjust as needed: if you still encounter the max polyphony exceeded error, you might need to further increase the polyphony limit or reconsider the logic that triggers note playing, to avoid exceeding the limit.

Is this okay? Yep, it's almost correct. Your initialization of Tone.PolySynth is almost correct, but a small adjustment is needed in the syntax. When creating a PolySynth in Tone.js, the number of voices (polyphony) and the synthesizer type should be passed as separate arguments, not as properties of an options object. Also, make sure the volume level and max polyphony are defined before they're used. Here's the corrected version. In this setup, maxPolyphony is set to 10, meaning the PolySynth can play up to 10 notes at the same time, and the volume level is set to minus 12 decibels, but you can adjust this value to suit your needs. First, the PolySynth is initialized with maxPolyphony and Tone.Synth; then the volume is set separately using polySynth.volume.value. This configuration should correctly initialize your PolySynth with the desired volume and polyphony settings. Make sure to test the synthesizer to ensure it behaves as expected, particularly in terms of volume and the number of simultaneous notes.

What is this? It keeps coming. The error message indicates that the way we're trying to set polyphony for Tone.PolySynth is outdated in the version of Tone.js you're using. In recent versions of Tone.js, polyphony is not set as the first argument in the constructor; instead, it should be specified within the options object. Let's correct the initialization of Tone.PolySynth (a version-dependent sketch follows). In this setup, the PolySynth is initialized with Tone.Synth as the voice type, and the volume is set within the options object. The maxPolyphony parameter is no longer passed as the first argument, as that form seems to be deprecated in the version of Tone.js you're working with. If you still need to control the maximum number of simultaneous voices (polyphony), you would typically manage this in your application logic rather than configuring it directly on the PolySynth; for example, you could limit the number of notes being triggered based on your application's specific requirements. Test this revised setup to ensure it meets your needs and the synthesizer behaves as expected.
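For reference, here is a hedged sketch of the newer-style Tone.PolySynth initialization being discussed, roughly as it looks in Tone.js v14; if the page is on an older Tone.js release, the voice-count-first constructor from earlier in the exchange applies instead, so treat the exact form as version dependent.

```javascript
// Sketch: current-style PolySynth setup (Tone.js ~v14), voice type plus options object.
const volumeLevel = -12; // decibels; lower is quieter

const polySynth = new Tone.PolySynth(Tone.Synth, {
  volume: volumeLevel, // applied to every voice the PolySynth creates
}).toDestination();

// In recent releases the voice cap is a property rather than a constructor argument;
// notes beyond this cap are dropped with the "Max polyphony exceeded" warning.
polySynth.maxPolyphony = 10;

// Example: a five-note chord, one note per extended finger.
polySynth.triggerAttackRelease(['C4', 'E4', 'G4', 'B4', 'D5'], '4n');
```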
We still get the following error in the console. The error "Max polyphony exceeded. Note dropped" suggests that your application is trying to play more notes simultaneously than the Tone.PolySynth can handle. In recent versions of Tone.js, the polyphony limit, i.e. the maximum number of simultaneous voices, is managed internally and might not be directly configurable as it was in older versions. Here are a few steps you can take to address this issue. Ensure that your application isn't triggering more notes than necessary; this often involves reviewing the logic that triggers notes, especially in response to gesture or landmark data from MediaPipe. Make sure that notes are being released (triggerRelease) appropriately after they're triggered; holding notes for too long, or failing to release them, can quickly consume the available polyphony. If your application logic allows, limit the number of notes that can be played simultaneously; for example, if you're tracking hands, you might limit the notes to 10, one for each finger. If different aspects of your application use different sets of notes or sound types, consider using multiple Tone.Synth instances instead of a single PolySynth; this can help distribute the note load. If you're using an older version of Tone.js, consider upgrading to the latest version, as newer versions might handle polyphony differently; conversely, if you're on the latest version and facing issues, you might explore whether an older version better suits your needs. Look at when the error occurs to understand the context: is it when multiple fingers are detected, or during specific interactions? This can help pinpoint the part of your code that needs optimization. Use console.log or debugging tools to trace how many notes are being played at a time; this can help identify scenarios where the polyphony limit is exceeded. By carefully reviewing, and possibly adjusting, how and when notes are played in your application, you can mitigate the risk of exceeding the polyphony limit. This often involves a balance between the desired functionality and the limitations of the synthesis library.

Yeah, we have all sorts of issues. I might have two separate versions of this tool: one that you can see, this one that I muted at the moment. Let me mute this one; you can mute this side. That doesn't work anymore. Unmute this side. Right, so that one works differently. It has the volume control depending on how high you go, and it plays longer notes, or shorter notes, depending on the distance between the thumb and the index finger. But the other tool... ah, and the main thing: this one also has the range. So obviously this one is better. The other one, what it's trying to do is, depending on how many fingers are extended, play multiple notes at once, in chord formations, or at least notes that actually match one another. But that one will need way more work, and we actually wanted to be doing something else today. I might do another stream later and actually continue with the fuzzy logic stuff; that's the core business. So I'll see you in a bit. Finish with some...