One is on the HTML and JavaScript; there is no backend. We would like to add a feature that checks how many fingers are shown on the screen using the camera and, based on the number of fingers, plays multiple notes simultaneously. So essentially, potentially, a different range of notes for each finger, or something like that. The idea is that we're playing chords, and that the chords will be legit chords, so the notes will be matched well together.

As for the finger counting: MediaPipe says it can detect and track the hand landmarks, including fingers, and we can use this to count the number of fingers being extended. So that will be good; we'll play a different number of notes depending on how many extended fingers there are.

Now, regarding the chord mapping: can we make sure that the same number of notes is played as there are extended fingers? If two fingers are extended, we play two notes; if three fingers, we play three notes. The mapping should correspond to notes that go well together, that are matched. And yes, we need to make sure we can play multiple notes at once; that will help. Does it mean we need to change the JavaScript? Are we using Tone.js at the moment? And yes, we would like to provide visual feedback on screen. Well, actually, we would like to keep the screen as clean as possible. We don't want any rubbish on the screen; we would like it to be very user-intuitive.

So, are we going to be using GitHub Copilot, or are you going to be generating the code without any placeholders? I'll be providing the code directly, without using GitHub Copilot. Here's a more detailed version of the JavaScript code for your application, integrating the functionality of playing chords based on the number of extended fingers detected by MediaPipe. Yeah, we're not doing pose, just face and hands for now. You could have some sort of dancing app that also looks at the whole body to generate music, but we won't be doing that at the moment.

GitHub Copilot needs a selection of the code; essentially it's sending it to GPT. Yeah, why do we have a FaceMesh instance? Why do we have the CDN file? Isn't that already being done in the HTML? Yep, yep, yep. It's still generating, that's why. Tone.js synth, right. Yeah, now it's taking code that already exists; I don't know why. But now we have a different scale from left to right. Can you continue integrating while I'm testing what we currently have? I'd also like to change the code to be able to show the coordinates only, without the actual camera footage. Let's try GitHub Copilot again; yeah, Copilot should be able to solve things like that. Which function is actually displaying the coordinates on the screen, overlaying them onto the video footage?

Right, we have landmarks. By the way, we're using the second camera. It's not working well; it's not displaying the landmarks. And this is odd: we have the canvas, we have the image. Okay, we don't have that code. The camera feeds into this function. Why do we have onFaceResults? This function should be inside onResults. Do we still have the scale? Yes, we do. There are a couple of things. First of all, the sound is way too loud. Second, I don't see the face and hand coordinates overlaid on the screen. Third, we would like an option to remove the original footage and just display the coordinates alone. That's the onResults function.
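A minimal sketch of the finger counting discussed above, assuming MediaPipe Hands with its published landmark indexing (fingertips 4/8/12/16/20, PIP joints 6/10/14/18, thumb IP 3); the helper name `countExtendedFingers` and the tip-above-joint heuristic are my assumptions, not code from this session:

```javascript
// Heuristic finger counter over one hand's MediaPipe landmarks.
// Coordinates are normalized, with y growing downward in the image.
function countExtendedFingers(landmarks) {
  const tips = [8, 12, 16, 20];  // index, middle, ring, pinky fingertips
  const pips = [6, 10, 14, 18];  // the corresponding PIP joints
  let count = 0;
  for (let i = 0; i < tips.length; i++) {
    // A finger counts as extended when its tip sits above its PIP joint.
    if (landmarks[tips[i]].y < landmarks[pips[i]].y) count++;
  }
  // Thumb: treat a clear sideways offset of tip (4) from IP joint (3) as extended.
  if (Math.abs(landmarks[4].x - landmarks[3].x) > 0.04) count++;
  return count;
}
```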
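And one way the finger-to-chord mapping could look with Tone.js, which the session mentions. `Tone.PolySynth` is Tone.js's class for voicing several notes at once; the chord table itself (stacked thirds in C major) is only an illustrative choice of notes that match:

```javascript
// One chord per finger count, built from stacked thirds in C major
// so that however many notes sound together, they are real chord tones.
const CHORDS = {
  1: ["C4"],
  2: ["C4", "E4"],
  3: ["C4", "E4", "G4"],               // C major triad
  4: ["C4", "E4", "G4", "B4"],         // Cmaj7
  5: ["C4", "E4", "G4", "B4", "D5"],   // Cmaj9
};

const synth = new Tone.PolySynth(Tone.Synth).toDestination();

function playChordForFingers(fingerCount) {
  const notes = CHORDS[fingerCount];
  // PolySynth accepts an array of notes, so the whole chord sounds at once.
  if (notes) synth.triggerAttackRelease(notes, "8n");
}
```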
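For the "coordinates only, no camera footage" request, a sketch of how `onResults` might look, assuming the usual names from the MediaPipe Hands web example (`canvasElement`, `canvasCtx`, and the `drawConnectors`/`drawLandmarks`/`HAND_CONNECTIONS` globals from the drawing_utils CDN bundle); the `showVideo` flag is assumed:

```javascript
let showVideo = false; // assumed toggle; wire it to a key or button

function onResults(results) {
  canvasCtx.save();
  canvasCtx.clearRect(0, 0, canvasElement.width, canvasElement.height);
  if (showVideo) {
    // Blit the camera frame only when the footage is wanted.
    canvasCtx.drawImage(results.image, 0, 0, canvasElement.width, canvasElement.height);
  }
  if (results.multiHandLandmarks) {
    for (const landmarks of results.multiHandLandmarks) {
      // Overlay just the skeleton and landmark dots on the (possibly blank) canvas.
      drawConnectors(canvasCtx, landmarks, HAND_CONNECTIONS, { color: "#00FF00", lineWidth: 2 });
      drawLandmarks(canvasCtx, landmarks, { color: "#FF0000", radius: 3 });
    }
  }
  canvasCtx.restore();
}
```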
A couple of things. We would like to display the face coordinates as well. Yeah, onFaceResults is not being used. Right, so we have the face. Also, how do we make sure the volume is not too high? At the moment, we're setting the volume once somewhere, but then we have the Tone synth in more than one place. The volume variable is just called `volume`; it's not a global, is it? I'm showing the camera first and then... Okay, we have the previous thing working as well, don't we? Can we mute this page? Okay, maybe. Lots of trouble because I have this page open twice. Okay, that's back; that's the new one. I can actually mute this side. Volume is not controlled by the... Okay, it's working now, which is great.

There are a few things missing from the original code. For example, the loudness used to be controlled by the Y-axis position of the fingers on the screen. It's not doing that anymore, it seems. Or maybe it does, I don't know; can we check? I mean, notes. We also have the following error in the console in the browser. Can we check how many notes we have? Like, what's the maximum number of notes that can be played at once? Your initialization of Tone.PolySynth is almost correct.

Yeah, we have all sorts of issues. I might have two separate versions of this tool: one that you can see, and this one that I've muted at the moment. Mute this one... mute this side... that doesn't work anymore. Unmute this side. Right, so that one works differently. It has the volume controlled depending on how high you go, and it plays longer or shorter notes depending on the distance between the thumb and the index finger. But the other tool... Ah, and the main thing: it also has the range. So obviously this one is better. The other one, what it's trying to do is, depending on how many fingers are extended, play multiple notes at once in chord formation, so notes that actually match one another. But that one will be way more work. So see you in a bit. I'll finish with some...
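On the volume question, one sketch: route every synth through a single shared `Tone.Volume` node so loudness is controlled in exactly one place, however many synths the code creates. The gesture mappings mirror what the older version reportedly did (hand height controls loudness, thumb-index distance controls note length), but the names, ranges, and thresholds here are assumptions:

```javascript
// One shared output stage: set volume here and it applies to every synth,
// no matter how many places create a synth. -12 dB is an assumed default.
const masterVolume = new Tone.Volume(-12).toDestination();
const synth = new Tone.PolySynth(Tone.Synth).connect(masterVolume);

// Normalized hand y runs 0 (top) to 1 (bottom); higher hand means louder.
// The -30..0 dB range is an illustrative choice.
function setVolumeFromHandY(y) {
  masterVolume.volume.value = -30 + (1 - y) * 30;
}

// Thumb tip (landmark 4) to index tip (landmark 8) pinch distance -> duration.
function durationFromPinch(thumbTip, indexTip) {
  const d = Math.hypot(thumbTip.x - indexTip.x, thumbTip.y - indexTip.y);
  return d > 0.15 ? "4n" : "16n"; // threshold is an assumption
}
```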
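And on the "almost correct" `Tone.PolySynth` initialization and the maximum number of simultaneous notes: a frequent pitfall is the old Tone.js v13 constructor `new Tone.PolySynth(4, Tone.Synth)`; in v14 and later the voice class is the first argument, and the polyphony cap is the `maxPolyphony` property (default 32, so five fingers are well within it). A sketch, assuming Tone.js v14:

```javascript
// Tone.js v14 form: voice class first, then options for the voices.
// (The v13 form `new Tone.PolySynth(4, Tone.Synth)` no longer applies.)
const synth = new Tone.PolySynth(Tone.Synth, {
  envelope: { attack: 0.02, release: 0.5 }, // illustrative voice settings
}).toDestination();

synth.maxPolyphony = 8; // cap on simultaneous voices; far more than five fingers need
```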