Okay, let's have some feedback on the website: go have a look, try the different tools, and let me know what you think. One of the more recent ones is the EEG-to-music conversion; there is a seizure in this recording, so we can actually hear what a seizure sounds like by scrolling through it. The auto-volume option probably should have been selected. It gets louder when the seizure appears in the middle of those hundred seconds of data, and this is what a seizure sounds like on the different channels over a ten-second window; on some channels it is more annoying than others, but that's the way it is. You also have the gesture-to-music generator, which is a bit more annoying on one hand than the other. I'm wondering if I can swap the hands, because whichever hand I'm using, this one sounds more annoying to me; I don't know what it sounds like to you. It can do short notes when you close the fingers like this, and longer notes, and it has two scales, one for each hand. That was a false detection: when my hand is behind the microphone or something else, it sometimes detects it as the opposite hand and plays a note from the other hand, which is not ideal. I really need a musician, so if you play any musical instruments, let me know and I can update this to sound better. These sessions are meant to be interactive, and if there is no human to talk to, I'll have to talk to the bot, the soulless, heartless robot; that's the best we've got. If there is a human, I can obviously use the chat, and I can even try bringing you on live; we did that a couple of times before with mixed results, but hopefully I'm better at setting it up now, so we can give it another go. So what are we going to be doing today? I'm glad you asked. It's an entire project, and we're actually trying to publish bits and pieces on GitHub; that didn't work really well, so I'll have to look at it again. It feels like a very bureaucratic thing to use, so I just close it. We're doing some music. I'm touching my face a lot when I speak, so I was actually thinking of using the tracker to alert myself when I'm doing it, because I'll probably either have to re-record that bit or stop doing it altogether. For now, especially since we're doing the webcam-based eye tracking, we'll need that camera. It's not on the site yet, it's still in development and hopefully it will be available for you to play with; I'll show you what it currently looks like. I'll just use one camera for this stream and the second one for the eye tracking. So I'll open Visual Studio Code, and I can make the window smaller or bigger, I don't know, let me know what you prefer. We have a bunch of files in it, obviously an HTML file. We'll start popping the code into GPT so it can explain it. I'm pretty sure OpenAI just decreased the context window, the number of tokens you can put in, which is quite unfortunate, because we also have the limit of 40 messages per 3 hours; if we do everything right, we should not hit that limit. And yes, before you scream at me for using GPT-4 like this for programming: we're not actually using it for programming, we use GitHub Copilot for that. However, GPT-4 is much better at the general context, so essentially, instead
of me reading these comments, we can quickly format this document and use it as a prompt, with markers for the beginning and end of main.js; that's working okay. I'm just wondering whether I should get rid of the commented-out fields, like this one, to reduce the token count. This is what we used for early debugging; we don't need it anymore. Do we need to log the prediction? No, I know that's working okay now. We had an issue with the bounding box for the face. We might as well run it and show you what it looks like. When we run it this way, it's a Flask application, even though at the moment there's not much at the backend; it's not doing anything. We'll pop this one into GPT straight away. Also, the window is behaving oddly in that it's not scrolling automatically to the bottom of the text, which is a bit annoying. So here we're drawing the eyes and detecting the pupil; we don't need those console logs. We have an indicator for whether the pupil was detected or not; I'll show you how it works in a sec. We replaced that because of the way the data is being returned from the face model. Anyway, that's what it looks like: you can see the pupils are being detected okay, but we want to improve it a bit and show how the processing is done, so I'm going to show exactly what was done to the image to achieve that. And by the way, feel free to interrupt with your feedback after checking the main website. Currently we are developing this webcam-based eye tracking, and in addition we're also looking at public biomedical datasets. The tools on the website are currently using ieg.org data, which is pretty much accessible to anyone; a lot of the other data is synthetic, so the ECG is synthetic at the moment, but we might look up real data in one of those databases. For now, as a warm-up, let's continue the eye-tracking development. We had the HTML file; we're using TensorFlow.js and the BlazeFace model, instead of MediaPipe in Python. We actually had the same thing before: another eye tracker running in Python using MediaPipe, and TensorFlow as well, or maybe something else, I don't remember. Now we're trying to make it lighter and run it all in front-end JavaScript alone. You can see the tracking, especially on this eye, is not great, and we'll ask the bot in a sec to explain how the detection is actually being made. No questions, by any chance? Let's continue; we haven't submitted the first prompt yet. We might just do the HTML and the Flask file; we don't expect any changes to them for now. The HTML will eventually have a description at the bottom of the page. Let's do the main JavaScript code, just removing redundant comments. I wonder if programmers will simply stop commenting code, considering you can just pop it into an LLM and it will explain it for you, in GPT or whatever else you're using. Now I don't have a CEO or someone talking in my ear; I'm just checking that the stream is using the correct microphone, and it seems to be the case. Let's pop main.js in as well. We have this line at the end of the prompt, "this is the end of this file", and we are currently at main.js. We also have stars.js, which we'll worry about later, the drawing utilities, and the face detection file; face detection is a bad name, because it also does the eye tracking, and we have the face model that finds the face.
Then main.js, which obviously handles the video stream; we don't expect any changes to that, and the face model is working okay. The drawing utilities are fine too. What we want to focus on is this face detection file, because that's what does the eye tracking. The name is a bit misleading: they call it eye tracking, but it's actually pupil tracking, isn't it, because you want the label to move with your pupils. English is funny that way, isn't it: pupil. Okay, so we do want face detection as well. We use the landmarks: landmark zero is the right eye and landmark one is the left eye, that's okay. The eye width is taken as 20% of the face width; is that actually based on some physiological studies? I don't know; might leave that comment. And the eye height is 10% of the face height, and it should get bigger and smaller with the face. My face is a bit oddly oriented here, but the detection is not great; it works on sort of one eye at a time, even though they should be the same. We also have a bunch of stuff running in the console, and that window scaling is not great, is it? That's okay. Add this to main.js; we want to do face detection. We don't need that, I know that's working okay. "Consider removing this in production" — why would I remove this in production? I don't know, it's kind of worse, and it's the same, isn't it? I like that one better; let's start with that and see what it says. We've been talking about this already. All right, GPT's response: diving into your code, you're setting up a webcam eye-tracking system using TensorFlow.js, BlazeFace and Flask. Your HTML and Flask setup look fine for a basic implementation; let's break down the JavaScript part, focusing on the main.js and faceDetection.js files, which seem to be the core of your eye-tracking logic. Video stream initialization: you're starting the video stream and attaching it to a canvas; this part seems correct. Model loading and frame processing: after loading the BlazeFace model, you're processing each video frame to detect faces and eyes, which is a standard approach for real-time tracking. Pupil detection: the detectAndDraw function is crucial; it uses predictions from BlazeFace to draw bounding boxes and landmarks and then focuses on the eyes for pupil detection; your approach of redrawing the canvas on each frame is typical for real-time detection. Eye region extraction: you're calculating the eye regions based on face landmarks; the percentages, 20% for width and 10% for height, are arbitrary but should work as a starting point, and you might need to adjust them for different face sizes or camera resolutions. Pupil detection: your method looks at grayscale values to find dark points, assuming the pupil is darker than the rest of the eye; this is a basic but effective method, however it might struggle in varying lighting conditions or with different eye colors. Dynamic thresholding: the fixed threshold for pupil detection might not be effective in all lighting conditions, so consider implementing adaptive thresholding. Error handling: your error handling is good, but ensure you also handle cases where the face or eyes are not detected. Performance optimization: real-time processing can be CPU and GPU intensive; check the performance and optimize, possibly by reducing the resolution or frame rate. User feedback: when pupils are not detected, consider providing visual feedback on the canvas. Overall, your implementation seems solid for a basic webcam eye-tracking system; just remember that real-world testing is crucial to fine-tune parameters like eye region size and threshold values.
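To make the approach being described concrete, here is a minimal sketch of the darkest-point idea: take an eye region as a fraction of the BlazeFace face box, read its pixels, convert to grayscale and pick the darkest pixel below a threshold as the pupil. The names extractEyeRegion and detectPupil match the functions mentioned later in the session, but the exact signatures, defaults and return shapes here are illustrative assumptions, not the project's actual code.

```javascript
// Hedged sketch, not the project's exact code.
// The 0.2 / 0.1 / 100 values are the "magic numbers" discussed in the session.
function extractEyeRegion(landmark, faceSize, widthRatio = 0.2, heightRatio = 0.1) {
  const eyeWidth = faceSize.width * widthRatio;
  const eyeHeight = faceSize.height * heightRatio;
  return {
    x: landmark[0] - eyeWidth / 2,
    y: landmark[1] - eyeHeight / 2,
    width: eyeWidth,
    height: eyeHeight,
  };
}

function detectPupil(ctx, region, darknessThreshold = 100) {
  const { data, width, height } = ctx.getImageData(
    Math.round(region.x), Math.round(region.y),
    Math.round(region.width), Math.round(region.height)
  );
  let darkest = null;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      // standard luminance-style grayscale conversion
      const gray = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
      if (gray < darknessThreshold && (darkest === null || gray < darkest.gray)) {
        darkest = { x: region.x + x, y: region.y + y, gray };
      }
    }
  }
  return darkest; // null means "pupil not detected"
}
```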
Okay, how about this: I'll provide the rest of the code, and eventually we want to display the whole pupil-detection process on the front panel, with any magic numbers controllable from the front panel as well; also mention the calibration procedure. We would also like to add another canvas that shows the eyes separately from the face, with the grayscale image, to show exactly how the pupil detection is being done, and can you go over the code and list any magic numbers? We eventually want to add them as controls on the front panel. GPT's answer: adding a separate canvas to focus on the eyes and providing user control over magic numbers are excellent ideas for enhancing your eye-tracking system; let's break down these enhancements and identify the magic numbers in your existing code. Separate canvas for eye tracking: implement a new canvas element in your HTML to display only the eye regions, and modify detectAndDraw in main.js to draw the eye regions and pupils on this new canvas; this dedicated canvas can zoom in on the eyes, providing a clear view of the detection process. User-controlled parameters, the magic numbers: eye region size, currently set as 20% of face width and 10% of face height, so provide sliders or input fields to adjust these percentages; pupil detection threshold, set at 100 on the grayscale, which is crucial for different lighting conditions, so a slider to adjust it would be beneficial; pupil radius, where in drawPupil the radius is set to 10% of the eye width, and offering control over this value can improve accuracy for different eye sizes. Implement a calibration process at the start: this can involve having the user look at different points on the screen to establish baseline measurements for eye and pupil positions, and the calibration data can then be used to fine-tune the tracking algorithm. In faceDetection.js: eye region size, eyeWidth = faceSize.width * 0.2 and eyeHeight = faceSize.height * 0.1; landmark point size, the ctx.fillRect call with the 2.5 offset and the 5-pixel squares; pupil radius, const radius = eyeWidth * 0.1; canvas size, where you might want to allow dynamic resizing of the canvas based on user preference or screen size. Use HTML input elements like sliders or number inputs to control these parameters, bind these inputs to your JavaScript functions, updating the values in real time or upon a submit action, and ensure robust error handling and input validation for user inputs. Incorporating these enhancements will not only make your system more versatile and accurate, but also more user friendly: users can tailor the tracking to their specific needs, and the separate eye canvas will provide a clear visual representation of the detection process. Yeah, how do we do that? Let's put the results list below. Page refresh — and why on earth is it not below the page? Let's look at style.css; put it next to the HTML for a sec. Yeah, "eye tracking results", this sounds good. Can it detect blinks? Can we? It's still detecting the pupil even when I close my eyes. Okay, this will need a lot of work. I mean, if I close my eyes this way, it says right pupil not detected, left pupil not detected, but when I just close my eyes normally, it's still detecting a pupil. Like, what? Is it the distance between the eyes? I want to pop the tool description there; yes, I want to move it here. No, why are you doing that?
It's screaming at me; how do we fix that? The whole tool description should be below, and the eye tracking results should be below the canvas. Can you fix it? Just generate the whole code. It looks like... yeah, I just need to make sure there are none of those, what are they called, placeholders and things that GPT-4 normally puts in. We don't currently do proper version control; I'll have to fix my Git and everything, and at the moment it still doesn't work. That's still the text I want; the problem is... get rid of that for a sec. See, the thing about the commenting out in the CSS: that was the original setup that overlaid one on top of the other, so it was taking my eyes and putting them in a separate canvas, and making sure that the canvas does this funny business. I also want to populate the eye tracking results, so counting blinks and things; currently that's not working. I'd still like this thing, though. Oh yeah, and it detects another face when I do this with my hand on my head, so that's not great, not ideal. If you have any suggestions for how to improve this, do let me know. These comments work in CSS, and now it should do all this overlaying business. We'll probably do another canvas that shows the grayed-out stuff. And this is the CSS file, and I currently have a problem where the description of the tool is hidden behind the canvas and the video stream. Can we fix that? The problem is shown in the uploaded image. Yep. GPT: let's work on ensuring that the descriptions and results of the eye tracking process appear below the canvas, as you've mentioned. First, it seems you have an overlapping issue in your CSS, where the #video and #trackingCanvas elements are positioned absolutely; this can cause other elements, like your descriptions, to be hidden behind them if not managed correctly. Let's fix that by adjusting the CSS to allocate space for the descriptions below the canvas and the video stream. We also need to ensure that the body element doesn't have height: 100vh, as this can restrict the visible area to only the viewport height and prevent scrolling. Here's an updated CSS that should resolve the issue and make the descriptions appear below your canvas. This will ensure that the tool description appears below the canvas and video elements; adjust the margin-top value as needed to accommodate the actual height of your video and canvas elements, and remember to apply the tool-description class to your descriptive text in the HTML file. Your descriptions and any dynamic results of the eye tracking process should now be properly displayed below the canvas and video elements on your page. For the placeholders and version control, here's what you might consider. Placeholders: ensure that your HTML input elements for controlling the magic numbers have placeholders that guide the user on what to input. Version control: if you're not already using a system like Git, you should consider starting; it helps track changes, manage different versions of your code and resolve conflicts. Remember to test these changes and adjust the margin or other styles as needed to fit the content you're displaying. If the issue persists, please ensure that the elements have the correct IDs and classes as referenced in your JavaScript and CSS files. What do we have? The problem still persists. Here's the HTML and CSS; it's putting the description and the output of the tracker behind the canvas. Yep.
It looks like there's some confusion due to the absolute positioning of your #video and #trackingCanvas elements. When you use absolute positioning, the elements are taken out of the normal document flow; this means they don't affect the position of other elements, and other content doesn't automatically move down to make space for them, which is likely why your description and results are being overlapped by the video and canvas. To fix this, you need to ensure that the rest of your content is pushed down enough to clear the absolutely positioned elements: you want to give your #eye-tracking-results and #tool-description a margin-top that is at least as large as the height of the video and canvas elements combined, plus any additional space for aesthetics. Here's an updated CSS that should resolve the issue. In your HTML file, ensure that the IDs are correct and match the CSS. Make sure to adjust the margin-top value on #tool-description and #eye-tracking-results by the correct amount to ensure that it clears the video and canvas; the exact value you need will depend on the total height of your video and canvas elements plus any additional space you want to give. Finally, test the updated code to make sure everything is displaying correctly; you might need to adjust the margin-top value based on your specific needs or if you change the layout or size of your video and canvas elements. Why is there this gap? Ah, Control-C. Yeah, we could give that as an option for the user to switch it like that, but I also don't quite understand where this gap is coming from. Yeah, it's the margin-top that appears twice, one, two. Can they just dynamically get arranged? Instead of the margin-top at 520px, can we just have the components dynamically arrange themselves on the page, what's it called, auto or something? This would not work with it. I would like to add an option: right now the tool description... I would like all the text on the left. So either we have a button to split the two, or we add an option on the HTML page to split the video and the canvas, and by default have them overlaid one on top of the other; we still need to make sure the text is below the canvas and the video footage. Okay, we obviously need to change the JavaScript as well, but I want the default to be overlaid. We can change the default of that checkbox later. Do I need that as a separate function? I don't think so. Okay, it's going into the main function. Okay, that doesn't do anything, does it? Yeah, a couple of things, quite a few actually. Okay, we've kind of left GPT-4 behind; we wanted to keep it up to date as well. Okay, there are a couple of problems. The split view toggle doesn't actually do anything; the idea was that we would have the canvas overlaid over the video footage and a toggle switch to separate them. The other problem is that the eye tracking results are not actually displaying any values; I assume that's a placeholder. We do have the values in the console log, but we would like some of them to be displayed on the front page as well. Okay, let's tackle these issues one by one. Split view toggle: the intended behavior seems to be to switch between an overlay view, canvas on top of the video, and a side-by-side view; since the toggle isn't doing anything, we'll need to adjust the JavaScript to correctly handle the view change.
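A minimal sketch of the kind of listener being described might look like this. It assumes a checkbox with id "splitViewToggle" and a #video-container element wrapping the video and the #trackingCanvas, which are names used loosely in the session; the exact ids and styling choices are assumptions, not the code that ended up in the project.

```javascript
// Hedged sketch: checked = side by side (normal flow), unchecked = canvas
// absolutely positioned over the video inside a relatively positioned container.
document.getElementById('splitViewToggle').addEventListener('change', (event) => {
  const container = document.getElementById('video-container'); // holds video + canvas
  const canvas = document.getElementById('trackingCanvas');
  if (event.target.checked) {
    // split view: put the canvas back into the normal document flow
    canvas.style.position = 'static';
  } else {
    // overlay view: canvas pinned to the top-left corner of the container,
    // which must be positioned for the absolute child to anchor to it
    container.style.position = 'relative';
    canvas.style.position = 'absolute';
    canvas.style.top = '0';
    canvas.style.left = '0';
  }
});
```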
Displaying eye tracking results: you want to display the eye tracking results dynamically on the web page, not just in the console, so we'll need to update the DOM with the results from the eye tracking process. The JavaScript event listener for the change event on the split view checkbox should adjust the CSS of the #video and #trackingCanvas elements to switch between absolute and static positioning; here's a revised version of the event listener to handle this. To make the split view toggle functional and display the eye tracking results dynamically on the front page, we need to address both the HTML structure and the JavaScript functionality. Let's start by ensuring that the split view toggle correctly changes the position of the canvas relative to the video. Split view toggle: the split view toggle input should change the positioning of the canvas from absolute, overlay, to static, separate; here's the updated JavaScript snippet to handle the split view toggle, which will toggle the positioning of the canvas and video based on the checkbox state. Dynamically displaying results: to display the eye tracking results dynamically, you should update the contents of the #eye-tracking-results div based on the data received from the prediction model; here's the updated detectAndDraw function that updates the eye tracking results on the web page, and make sure your HTML element IDs match the IDs you're using in the JavaScript. CSS update: to ensure that the eye tracking results and the tool description are visible, you should set #video-container to relative positioning by default; this way, #eye-tracking-results and #tool-description will naturally flow below the #video-container. With these changes, the split view toggle will switch between overlay and separate views, and the eye tracking results will dynamically update on the web page; remember to test the functionality to ensure it works as expected and to make any necessary adjustments to the layout and styles based on your specific design requirements. Yes, so now the split view toggle button does work, but it's not working correctly. "Dynamically displaying results: to display the eye tracking results dynamically, you should update the contents of the #eye-tracking-results div based on the data received from the prediction model; here's the updated detectAndDraw function." Yeah, we already have that, don't we? Oh no, we actually... okay, so if rightPupil is detected, we're drawing the pupil and also inserting the text at a fixed point. Okay, right. So we have the coordinates there, and they actually seem to work; if there is no face, it just won't change the values. Why is it called the video container? Position relative. "CSS update: to ensure that the eye tracking results and the tool description are visible, you should set #video-container to relative positioning by default." Yeah, except the video container is just called video. So by default it's not a split view. It's moving the description up; it was meant to be moving the canvas. Yeah, the eye blink will need a separate function for it; it's not working at the moment.
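The DOM-update idea mentioned above amounts to something like the following. The #eye-tracking-results container is the element discussed in the session; the helper name, formatting and call site are illustrative assumptions.

```javascript
// Hedged sketch of pushing results into the page instead of console.log.
function updateEyeTrackingResults(rightPupil, leftPupil) {
  const results = document.getElementById('eye-tracking-results');
  if (!results) return;
  const fmt = (p) => (p ? `x: ${Math.round(p.x)}, y: ${Math.round(p.y)}` : 'not detected');
  results.innerHTML =
    `Right pupil: ${fmt(rightPupil)}<br>` +
    `Left pupil: ${fmt(leftPupil)}`;
}

// inside detectAndDraw, after the pupils have (or haven't) been found:
// updateEyeTrackingResults(rightPupil, leftPupil);
```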
Why is it only true or false for the blink? Yeah, we could count the number of blinks, but we don't have that functionality yet. Can we clear the console for a sec? Yeah, maybe we stop... the console — spell "console" properly — and get rid of those logs, because we're actually troubleshooting something else now. We still have the prediction log; yeah, this one, get rid of it, and the comment — that part is working. When we have a canvas, there's this warning: multiple readback operations using getImageData are faster with the willReadFrequently attribute set to true. I'm not sure what that means. A good thing about having small files is that it can actually troubleshoot better. We asked Copilot, and the warning message you're seeing just suggests setting the willReadFrequently attribute to true on your canvas: if you're going to be calling getImageData multiple times, this can improve the performance of readback operations. It sounds important. You can set that attribute when you get the context of the canvas, and here's how to do it: in main.js, you should update the line where you get the canvas context to look like this. Show exactly what it does. But okay, has the warning gone away? Next: the split screen, the split view toggle button, is not working correctly, can you have a look? I don't know if it will even find it; I think it needs to update the CSS as well. So: the issue might be due to the CSS positioning properties; when you set the position of the canvas to static or absolute, elements are taken out of the normal document flow, which might cause layout issues. Instead, you should change the position property of the parent container, the video container, based on the checkbox, and it tells you how to do it. Okay. Not sure about the row and column thing, but happy to try it out. And then the CSS: we also need the video container, except the video container is just called video. Block, column, center — so those will be the default values, right? Yeah, pop that in. Wait, do we have both video-container and video? Okay, now I'm confused. Is the rest of the file okay, does this CSS file look legit? I wonder if it has the context. That's okay, let's see whether we have video-container... and no, it doesn't seem to have the context of the previous prompts; that's the big difference between GPT-4 and GitHub Copilot. Right, so we do have video-container. Yeah, default option. Okay, now it's different. Okay, now it's putting them side by side; that's not exactly what I meant, I meant they're either overlaid or not. Okay, it's still not working correctly. They're meant to be overlaid by default: the canvas should be overlaid over the video by default, and when you click split view, it should split them, side by side or top to bottom depending on the size of the screen. Okay, it's very apologetic; I don't know if that's a good sign or not, probably not. Yeah, we want the default state to be overlaid. Does it need changes to the HTML? Just the script. This code should give you the desired behavior: the video and canvas will be overlaid by default, and when you check the split view checkbox, they will split either side by side or top to bottom depending on the screen size. Okay, it sounds too good to be true, but we're happy to try it out. The checkbox is below; that's fine, we don't really care where it is, we just want it to work. And yeah, it will fix the indentation. Right, so we have the checkbox below the video now; let's save that. Then in this getElementById('splitViewToggle')... Yeah, so they do look overlaid, and when we hit split, it splits. Okay, GitHub Copilot actually does useful stuff. How good is that?
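The willReadFrequently change being described is a one-line tweak where the 2D context is requested; the variable names below are illustrative.

```javascript
// Hint to the browser that getImageData will be called often on this canvas,
// which makes those readback operations cheaper and silences the warning.
const canvas = document.getElementById('trackingCanvas');
const ctx = canvas.getContext('2d', { willReadFrequently: true });
```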
Okay, now the button, the split view button... oh no, the text is still messed up. So it's working okay to begin with. We probably want the split button at the top — yes, we have the split button at the top, the results of the eye tracking are overlaid, and we can do this. However, when we go back and tick it, it doesn't go back to the original view. Okay, it is working much better now: the default works perfectly well, and when I hit split view it works once as expected, the canvas is below the video footage, but when I un-tick split view, the result is not correct. It's apologizing; apologizing again. When you uncheck the split view box, we need to ensure that the canvas is again overlaid on top of the video. This issue might be due to the fact that when we set the position of the canvas back to absolute, it doesn't know what to be positioned relative to; we can fix this by setting the position of the video to relative when the checkbox is unchecked, so that the canvas will be positioned relative to the video. Does that make sense? I don't know, maybe. This should give you the desired behavior: the video and canvas will be overlaid by default, when you check split view they will split, and when you uncheck it the canvas will again be overlaid. Okay, it's a big promise, brought to you by GitHub Copilot; will it actually work as advertised? So it's working fine to begin with, we have the eye tracking overlaid... yes, it's still rubbish, we still have the same problem. Do we need to update the CSS file as well? We'll just get rid of all of that. Try canvas. We shouldn't need anything for the tool description. Comment that out properly, can you? Copy the CSS again and change main.js; this is like the fifth time we're changing this. We have it overlaid. Okay, it's a bit odd, but it's better: the default is split view turned off, so it is overlaid, and then you can split it that way. It's a bit unusual, but it's okay; we can move on, I think. You can delete all that. Save main.js. By the way, these sessions are meant to be interactive, so go check the site, let me know what you think, or pop your comments in about what I'm currently doing wrong; surely you have a lot of opinions, and this is a good time to share them. Just take a quick break, come back in a sec. Where's the graying-out bit? Can you explain how the eye tracking works? I meant the pupil tracking. Here we have the video stream, face detection, eye detection, pupil detection; that's the one we're after. "It's not trained on anything" — that's bull. Okay, can you explain how the pupil detection works, with links to the relevant code? Here we have the pupil function. Why is this not clickable? It doesn't need to finish generating, does it? We have the eye detection, then the image extraction, then the functions "likely"... why is it "likely", why not specific? "The pupil detection function returns an object, darkestPoint, representing..." — that's actually wrong. I'll just use the images, that's okay, it's just a guess. Yeah, can you do it again now that you have the code? I don't know why it does this funny thing where it says it doesn't have access to the code. Yeah, the darkest point detection: it converts to grayscale... "just take the red channel"? That doesn't make any sense. Oh, since it's grayscale, we can just take the red channel — is that what it's doing? Yeah, so: grayscale, then darkest point. How do we do this? I don't think that's the darkest point; darkestPoint is not a function. Oops, I made a...
Yeah, this is why you need proper version control. Let's check. Yeah, we were modifying a copy of the tracking file by mistake; this one was modified Monday, so it's old, and this one is the current one. Let's make sure it's working correctly; any errors? Yeah, we can split or overlay. "Image processing, edge..." — okay, it's suggesting edge detection, pupil classification; wow, what is it doing? I meant in the JavaScript. Yeah, that's what we had before in Python. Unfortunately, with GitHub Copilot you can't just chat to it like a normal person, or as if it's a person; it essentially only prompts on this individual file. Yeah, this message is too long, 350 lines of code; that's why I was getting rid of comments and the like. We do need those markers, so that if we load it into GPT-4 it knows where one file starts and where it ends; that's kind of obvious. Yeah, we don't need this. We do want to keep this odd comment; that's the code we'll be going over, converting to grayscale and so on, trying to improve the eye tracking. Let me know if that's ready for publishing; I'm happy to share it and make it available for you to play with. "The radius of the eye may need adjustment"... What's the structure of the files? Where is tracking.js being called? It's not in main. This is super weird, wait a sec. That looks like... we didn't have tracking.js originally; tracking.js has stuffed up the whole structure of this project. Can we ask GitHub Copilot to explain this... I'll actually need this; I'm not using this. Stuffed it up. That's the one we don't need anymore, because we split it into... yeah, these two can be moved out, we're not actually using them; I need to know how to get back. Face detection, drawing utilities, face model. Yeah, the pupil detection is happening here, and this is only 80 or 90 lines of code. It has two functions: extractEyeRegion and detectPupil. Sounds legit. ExtractEyeRegion gives the coordinates for the right and left eyes, and that's working fine. Then detectPupil is a bit more complex, you bet — well, maybe it shouldn't be, maybe it doesn't have to be. It converts the coordinates to integers, retrieves the image data, and converts each pixel into grayscale; that's what we would like to see — how is that actually happening? Yes, it's finding the darkest pixels in the ROI, the region of interest. Two things: we want to be able to change that threshold. What magic numbers are used in the code? So this kind of makes sense: you have the width and height as a percentage of the face, 0.2 and 0.1, and it's scaling as well, so that's good. In the detectPupil function, 100 is used as the threshold determining whether a pixel is considered dark or not. Yes, we have those. It's not generating the whole thing. Right, it's looking at the coefficients; that's interesting. Do I need those as input parameters? Do we need those as controls on the page? Thinking about the darkness threshold to begin with: yeah, this is obviously worse when going higher; it was 50 to begin with, and now we've changed it. Yeah, this size is working better; the other one is off, so I don't know, is it just my eyes? That's why I probably should just release this tool. It's doing this for each eye. Is it too long? Yes, that's the problem with this algorithm. I thought it was only searching for a circle — no, I don't think it does; it just does the grayscale stuff, it doesn't do the circle. What's it called, the Hough transform, for finding circles? No, it doesn't look like it; we had that in another version of the code.
Just converting to grayscale and finding the darkest point for the pupil detection. But yes, we would like to add another stage. Now it's jumping between "right pupil not detected" and "left pupil not detected", so 100 was better in that sense. Right, so we have a threshold of 100, and we have this gazillion of grayscale data points — do we? Yeah, like 330; that's quite a bit, isn't it, 500. We might pop all of those onto the screen as input variables so we can adjust these values, keep some defaults that work better for me, and just release this tool; I don't want to spend much more time on this. "Darkness threshold" sounds scary. Oops. Ah, and the Hough transform: yeah, that should prevent it from finding a pupil when the eyes are closed, and that's what we could use for detecting a blink as well; if there is no circle, then there must be a blink, type of scenario. Just take another quick break; if you haven't checked bionickills.com, it's a good time to do so, because then you can provide your feedback live as well. All right: let's ensure that your split view toggle works correctly and the eye tracking results are displayed dynamically on the front end. Here's what we'll do: modify the event listener for the split view toggle to switch between overlay and side-by-side views, and ensure the eye tracking results are updated in real time on the web page. Let's start with the toggle functionality; since you've added a flex-direction change based on the window's aspect ratio, I'll keep that consideration in the code. JavaScript changes; CSS changes: you might need to adjust your CSS to allow for flexibility when the toggle is switched, and here's a suggestion for how you could structure your CSS. HTML changes: ensure your checkbox label is clearly associated with the checkbox. Dynamic eye tracking results: your JavaScript code for the detectAndDraw function already updates the eye positions in the DOM; ensure that you also update the other results, like blinks and gaze direction, in the same way. This can be done within the if conditions where you check whether rightPupil and leftPupil are detected; the blink condition would be a boolean that you determine based on the predictions from your eye tracking model. These adjustments should give you the toggle functionality you're looking for and update the results on the front end in real time; remember to test thoroughly and adjust as necessary to achieve the desired behavior and layout. Can we look at the rest of the code? Don't worry about the — well, maybe do worry — no, the display is okay now, the front page is working fine. Now, in the past, I don't remember exactly, but I think in the Python code we had a Hough transform that would detect the circle within the eye; currently we're just doing the grayscale, so we might look into bringing that back. I would also like to list all the magic numbers in the code and place them as input variables on the front page. I need help; I need your help to do that.
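For what it's worth, wiring those magic numbers up to front-panel controls could look roughly like this. This is a hedged sketch only: the input ids, the settings object and its default values are assumptions for illustration, not the names used in the project.

```javascript
// Assumed <input> ids and a shared settings object the detection code reads from.
const detectionSettings = {
  eyeWidthRatio: 0.2,      // eye region width as a fraction of face width
  eyeHeightRatio: 0.1,     // eye region height as a fraction of face height
  darknessThreshold: 100,  // grayscale value below which a pixel counts as "dark"
  pupilRadiusRatio: 0.1,   // drawn pupil radius as a fraction of eye width
};

function bindSetting(inputId, key) {
  const input = document.getElementById(inputId);
  if (!input) return;
  input.value = detectionSettings[key];
  input.addEventListener('input', () => {
    const value = parseFloat(input.value);
    if (!Number.isNaN(value)) detectionSettings[key] = value;
  });
}

bindSetting('eyeWidthRatio', 'eyeWidthRatio');
bindSetting('eyeHeightRatio', 'eyeHeightRatio');
bindSetting('darknessThreshold', 'darknessThreshold');
bindSetting('pupilRadiusRatio', 'pupilRadiusRatio');
// extractEyeRegion / detectPupil would then read detectionSettings.* instead of literals.
```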
Yep. Great, let's tackle this step by step. We will: identify all the magic numbers in your JavaScript code, create configurable inputs for these magic numbers on the front end, and discuss how to integrate a more sophisticated pupil detection algorithm, similar to what you had in Python. In your JavaScript code, here are the magic numbers: eye region size, 0.2 and 0.1, for calculating eye width and eye height; pupil detection threshold, 100, used for grayscale thresholding; landmark point size, 2.5 for the offset and 5 for the square size in drawLandmarks; pupil radius multiplier, 0.1, used to calculate the radius of the drawn pupil. Yes, we have all these magic numbers in the code. For each magic number, we will need an input on the front end; you can add input fields to your HTML for each parameter, and here's an example of how you might add these. Is the code on my GitHub? Well, yes and no, some of it is there; maybe if the robot can fix my GitHub, that would be great. In your JavaScript, you will then fetch these values and use them in your functions; here's a function that updates these settings. There is some of the code there. Yes, generally it's all open source, so since the application is JavaScript, it can in principle fetch it off the website itself. Remember to update your existing functions to either accept these parameters or use the variables set by this function. If there is a Python back end, well, then... As for the pupil detection, you mentioned a Hough transform in Python, which I assume refers to the Hough circle transform or a similar algorithm used for detecting circular shapes. To integrate such an algorithm, you typically need an edge detection step followed by the circle detection. TensorFlow.js doesn't natively support these operations, as they are more common in OpenCV. For this, you might consider: using TFJS to run an equivalent machine learning model that's been trained to detect pupils; compiling OpenCV to WebAssembly and using it in the browser alongside your TensorFlow.js model; or offloading the processing to a server that runs Python with OpenCV and then sends the results back to your front end. This last option would involve significant changes to your application architecture and is quite an advanced topic; if you are interested in it, you would typically set up a server endpoint to process images and return the pupil coordinates, then modify your detectPupil function to send a frame to this endpoint and handle the response. Absolutely, if you prefer to keep the processing client-side in JavaScript and avoid heavy back-end computations, let's focus on that. Regarding the Hough circle transform in JavaScript: it is indeed possible to implement such an algorithm on the client side, but it's not natively supported by TensorFlow.js or the basic HTML5 canvas APIs; you would have to implement the algorithm manually in JavaScript, which can be quite complex and computationally intensive for a browser environment. However, there are some libraries and approaches that can help with this task. Using a JavaScript computer vision library: libraries such as tracking.js have some capabilities for feature detection that might be extended to detect circles, or at least assist in the process. The pipeline would be: grayscale conversion, which you're already doing; edge detection, possibly using a Sobel operator or Canny edge detector; accumulator space creation; and peak finding to detect circles. This approach would be quite intensive in terms of development and processing power required on the client side.
Here's a rough outline of what the code structure might look like if you were to implement a Hough circle transform in JavaScript. Implementing this correctly requires a good understanding of image processing algorithms and might be overkill for a lightweight client-side application. Simplifying the approach: instead of implementing a full Hough transform, consider using simpler heuristics that are less accurate but also less intensive. For example, you can improve your current grayscale thresholding approach by also considering the circularity of the dark regions, e.g. by comparing the area to the bounding-box aspect ratio, and the relative position within the eye region, expecting the pupil to be centrally located. This would still be an approximation, but it could improve the robustness of your current implementation without the overhead of a full Hough transform. Given the complexity of implementing the Hough circle transform in JavaScript and the performance limitations on the client side, I would recommend starting with the third option and enhancing your current pupil detection with additional heuristics; if the accuracy is still insufficient, then you might consider either offloading to a server or using a more comprehensive computer vision library that can run in the browser. Clear. Again, we would like to avoid transferring to the server. Can you actually populate the suggested functions, particularly the Hough circle transform? Another thing we could do: currently it seems to be thrown off by the eyelids, so it's detecting dark regions in the eyelids as well. Can we improve that? So yes, essentially some sort of algorithm that finds the white of the eye and the pupil within it, somehow avoiding eyelids. Eyelids, that's right. And can you also recommend which of the magic numbers in the code should be placed as controls on the front panel, specific to pupil detection? The obvious candidate is the darkness threshold, which seems to be affected by the amount of incoming ambient light. Here is the code again, just in case. In your JavaScript-based eye tracking system, you're looking for ways to improve pupil detection, particularly to avoid false detections due to eyelids and to manage the effects of ambient light; you also want to control the magic numbers through the user interface. Improving pupil detection: the current grayscale thresholding approach can indeed be thrown off by eyelids, because they create dark regions. To improve this, consider these strategies. Dynamic thresholding: instead of a fixed threshold value, use a dynamic value that can adjust to the average brightness of the eye region; this helps account for different lighting conditions. Morphological operations: after thresholding, use morphological operations like opening and closing to remove small dark spots and to close small gaps in detected regions. Contour analysis: once you have a binary image from thresholding, look for contours and analyze their shapes; the pupil will be a relatively circular and dark area in the center of the eye region. Region of interest adjustment: adjust the region of interest for the eye to exclude the areas typically covered by the eyelids. Edge detection: before applying a Hough circle transform, perform edge detection to outline the boundaries of the pupil more clearly.
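To make the dynamic-thresholding and "pupil is near the center" suggestions above concrete, here is a hedged sketch. It assumes the eye ROI is already available as a flat array of grayscale values; the threshold factor, the distance penalty and the function name are all illustrative, not the project's code.

```javascript
// Dynamic darkness threshold from the eye region's mean brightness, plus a
// crude centrality penalty to discount dark eyelid/lash pixels near the edges.
function detectPupilDynamic(gray, width, height, thresholdFactor = 0.6) {
  const mean = gray.reduce((sum, v) => sum + v, 0) / gray.length;
  const threshold = mean * thresholdFactor; // adapts to ambient light

  const cx = width / 2, cy = height / 2;
  let best = null;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const value = gray[y * width + x];
      if (value >= threshold) continue;                 // not dark enough
      // penalize dark pixels far from the eye center (likely eyelid shadow)
      const dist = Math.hypot(x - cx, y - cy) / Math.hypot(cx, cy);
      const score = value + dist * 255 * 0.5;
      if (best === null || score < best.score) best = { x, y, score };
    }
  }
  return best; // null when nothing below the dynamic threshold was found
}
```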
Implementing the Hough circle transform: while you can implement a Hough circle transform in JavaScript, doing so efficiently in a browser environment is non-trivial. It involves several steps, such as edge detection, creating an accumulator for the circle parameters, and finding local maxima in that accumulator; here's a basic outline of how a Hough circle transform could be structured. Front-panel controls for the magic numbers: for your application's front-end controls, here are some suggestions. Grayscale threshold: slider, range 0 to 255, default value 100, or based on dynamic thresholding if implemented. Edge threshold: slider, range 0 to 255, default value 50, which is just a suggested starting value. Pupil radius range: two sliders, for minimum and maximum radius, with the range based on the expected size of pupils in pixels, default values 10 (min) and 30 (max). Accumulator vote threshold: slider, with the range determined by the size of the accumulator and the number of edge pixels, and the default dependent on the specific implementation. Eye region margins: sliders for top, bottom, left and right margins, range 0 to the width or height of the eye region, default values 0. These controls would be tied to JavaScript functions that can update the parameters used in your detection algorithms in real time. Remember, implementing a Hough circle transform in JavaScript and running it in a browser will be computationally expensive and may not perform well on all devices, especially without WebAssembly optimizations or similar techniques; the contour analysis and dynamic thresholding are likely to be more efficient while still improving the accuracy of your pupil detection. We'll have trouble integrating this code, won't we? We're not receiving any errors in JavaScript, which is good. Should we? I don't know; let me know what you think. Do we need... do we want to control how the pupil detection is happening? Where would this Hough circle transform go — in a separate file? Yeah, we eventually want to get rid of all these magic numbers. Ah, this is really not cool: I have this shortcut in OBS, and I just realized it was messing up my code; it was inserting characters. I had added a Shift+M shortcut for muting my microphone; now I'm using Ctrl+Shift+M for muting and unmuting, but that's also triggering something in Visual Studio Code — at least it's not adding text to the code. You can't win with shortcuts, can you? Ctrl+Shift+M should be pretty safe, but it is used in Visual Studio Code for something, so I can't have a proper decent shortcut for it; I'll just use the mouse. Reading the response through again: in your JavaScript-based eye tracking system, you're looking for ways to improve pupil detection, particularly to avoid false detections due to eyelids and to manage the effects of ambient light, and you also want to control the magic numbers through the user interface. The current grayscale thresholding approach can indeed be thrown off by eyelids because they create dark regions, so consider these strategies: dynamic thresholding, using a value that adjusts to the average brightness of the eye region
to account for different lighting conditions; morphological operations, using opening and closing after thresholding to remove small dark spots and close small gaps; contour analysis, looking for contours in the binary image, with the pupil typically being a relatively circular, dark area in the center of the eye region; region-of-interest adjustment, to exclude the areas typically covered by the eyelids; and edge detection before applying a Hough circle transform, to outline the boundaries of the pupil more clearly. Then, implementing the Hough circle transform: while you can implement one in JavaScript, doing so efficiently in a browser environment is non-trivial; it involves edge detection, creating an accumulator for circle parameters, and finding local maxima in that accumulator, and here's a basic outline of how it could be structured. The JavaScript it gives is something like function houghCircleTransform(imageData, edgeThreshold, radiusRange), then "perform edge detection, e.g. using the Canny edge detector", let edges = cannyEdgeDetector(imageData, edgeThreshold)... Okay, that's obviously not the full implementation of the code. And let's see, is this the accumulator, storing votes for circle parameters? What would that function look like? The vote function: for each edge pixel, vote for all possible circles that could pass through it, and it has the radius range as an input. Yeah, it's a bit odd; it doesn't sound right. We will just stick to the darkest region; I mean, it's working, it's not working too badly. Yeah, the problem with this JavaScript implementation is that, yes, obviously, you can do much better in Python. I mean, I could probably do both, but we currently don't have any backend, server-side processing. Yeah, one of those magic-number variables should be outside the function. Let's worry about the eye detection later. These two can go here; the other two can go there. If I just say "update", right? Well, it's actually... okay. Yeah, that's the eye width and height ratio. Just the width. No, it should be okay. Yeah, we're just trying to take control over any magic numbers that we have. That's doing the grayscale: there's the green coefficient, and we have the blue coefficient as well. Eventually we're thinking of adding them as input parameters on the front page for you to play with. But here we're trying to use GPT-4 to generate test prompts for GitHub Copilot, so that Copilot actually does the coding; let's see if it does any better. It's the same, using this Canny edge detector. Is that something we need to add a library for, or something? It's suggesting an extra function. So this one is from GitHub Copilot and this one is from GPT-4. We can have the Hough circle transform as an option you can turn on and off, and see what difference it makes. Yeah, they're obviously slightly different; I like the second one better in terms of its inputs, and these parameters, edge threshold and radius range, should be coming from the front panel, so ideally we'll have a Hough-circle-transform on/off option. Okay, I need your help integrating the code. Can you generate the full code? Which parts of the code need updating? Can you generate full functions? Update the HTML and main.js as well.
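For reference, a rough sketch of what a filled-in version of that outline might look like. It is assumption-heavy: the input is a flat grayscale array for the eye ROI, a crude gradient-magnitude test stands in for a real Canny detector, and the angular sampling is deliberately coarse; this is illustrative, not the code that ended up in the project.

```javascript
// Simplified Hough circle transform over a small grayscale eye ROI.
function houghCircleTransform(gray, width, height, edgeThreshold, minRadius, maxRadius) {
  // 1. crude edge map from the gradient magnitude (stand-in for Canny)
  const edges = [];
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      const gx = gray[y * width + (x + 1)] - gray[y * width + (x - 1)];
      const gy = gray[(y + 1) * width + x] - gray[(y - 1) * width + x];
      if (Math.hypot(gx, gy) > edgeThreshold) edges.push([x, y]);
    }
  }
  // 2. accumulator: every edge pixel votes for the centers of circles it could lie on
  const votes = new Map();
  for (const [x, y] of edges) {
    for (let r = minRadius; r <= maxRadius; r++) {
      for (let deg = 0; deg < 360; deg += 10) {
        const cx = Math.round(x - r * Math.cos((deg * Math.PI) / 180));
        const cy = Math.round(y - r * Math.sin((deg * Math.PI) / 180));
        if (cx < 0 || cy < 0 || cx >= width || cy >= height) continue;
        const key = `${cx},${cy},${r}`;
        votes.set(key, (votes.get(key) || 0) + 1);
      }
    }
  }
  // 3. the most-voted (cx, cy, r) is the best circle candidate, i.e. the pupil guess
  let best = null;
  for (const [key, count] of votes) {
    if (!best || count > best.count) {
      const [cx, cy, r] = key.split(',').map(Number);
      best = { x: cx, y: cy, radius: r, count };
    }
  }
  return best; // null when no edges were found
}
```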
To integrate the changes and improvements into your code, let's address each part of the system: faceDetection.js, main.js, and the HTML front end. The provided houghCircleTransform function is an outline and needs the actual implementation of its components, like cannyEdgeDetector, createAccumulator, voteForCircles, and findLocalMaxima; these are complex algorithms and would require a significant amount of code, which goes beyond the scope of a single response. However, I can guide you on how to structure these functions conceptually; you'll have to fill in the algorithmic details or use a library that implements them. For your current pupil detection, based on grayscale thresholding, you can enhance the algorithm by filtering the dark points to remove those that are not within the expected range of the pupil's position; this can involve checking whether the dark point is within a central region of the eye and not too close to the edges, which could belong to the eyelids. In your main.js, you need to connect the UI controls with your detection functions; for this, you need to create functions that read the values from the UI and update the constants used in faceDetection.js. Here's an example of how you can set up these connections: add input elements to control the magic numbers. By connecting these controls to your JavaScript code, you provide a way for users to fine-tune the pupil detection to their specific environment and lighting conditions; this will require thorough testing to ensure the dynamic updating of parameters works as intended and that the pupil detection is robust across different scenarios. Remember, a full implementation of the Hough transform in JavaScript is a significant task and may be too intensive for real-time applications running in a web browser; consider starting with simpler enhancements to your existing grayscale thresholding algorithm and proceed to more complex algorithms only if necessary. That's a good suggestion; maybe the robot knows what it's talking about. This function is not being used yet, so in theory we shouldn't be getting any errors. Let's check with Ctrl+F5. I do want to add some controls; the detection parameters don't have to be updated live, though. Where do we want the controls? Can this be a different button? Yeah, I like a button like that. We need a grid. I'm getting ads for dog food. Under "label", right — labels, where are they? Oops, let's just save. Here they are. We'll move this later, believe it. View in there. Right: parameters. Are they doing anything? We also have the red, green and blue coefficients. This is main.js, updating the parameters, and this is the end of main.js; that's fine. Do we have any issues? It must have a lot of issues with this code. We do not have that DOM element; we need to restructure the code. Let's take a break first. It's like one of those filters — well, I never use many of those, but the one that makes your eyes bigger; except this is an illusion, it just selects a different region of interest, that's all. Now that will be the default option. The threshold doesn't work yet, but split view works, and if you have a wider screen it will place them side by side, so that's okay.
You have the left and right eye positions. We don't have a blink detector yet, and we don't have gaze direction yet. We will improve this description. What is it? "The webcam eye tracking system is a tool that uses a webcam to track the user's eye movements. The tool uses the TensorFlow.js library to detect the user's face and eyes and then tracks the movement of the eyes in real time. The tool can be used for a variety of applications, including user interface design, human-computer interaction and accessibility." Yes. We obviously don't need a reset button or anything; we just hit F5 and it resets the ratios. So that's working okay; now to get the darkness threshold to work as well. When we detect the pupil, we have the darkness threshold as an input now, and also these three coefficients, red, green and blue, because they affect the grayscale conversion. Let me ask GPT-3.5. "Those coefficients you've got there are commonly used in the RGB-to-grayscale conversion process; they represent the relative luminance of each color channel in human perception. The values are based on how sensitive our eyes are to different colors: when you're converting a color image to grayscale, you multiply the intensity values of each channel, red, green and blue, by these coefficients and sum them up to get the grayscale intensity. It's a neat trick to capture the essence of an image in black and white." I assume these numbers are commonly used, or do I need an option to adjust them slightly? "These coefficients are commonly used, as they approximate the human eye's sensitivity to different colors. However, depending on specific applications or preferences, you might want to adjust them slightly; for instance, if you're working with images where certain colors are more important, you could tweak the coefficients to emphasize those colors more in the grayscale conversion. It's all about fine-tuning to achieve the desired result for your particular use case, so having an option to adjust them slightly can be beneficial for customization." How about specifically for eye tracking and pupil detection that uses a grayscale conversion of a video feed — do these numbers sound legit, or do I need an option to modify them? "For eye tracking and pupil detection, the grayscale conversion is crucial, since it simplifies the image for easier processing. The coefficients you mentioned, 0.299 for red, 0.587 for green, and 0.114 for blue, are commonly used in many image processing applications, including eye tracking. However, since the success of pupil detection can be influenced by various factors like lighting conditions and skin tone, having the option to modify these coefficients might be beneficial; it allows for fine-tuning the conversion process to better suit the specific characteristics of the images you're working with, potentially improving the accuracy of the pupil detection algorithm. So yes, providing an option to adjust these coefficients could be helpful in optimizing the performance of your eye tracking system." Eh, right. It's funny how, when I close my eye, it can still detect pupils. Well, I can't see that, obviously. Let's do a... eh, print screen. Not ideal, not ideal at all. Can you detect what the problem with this tool might be and suggest any solutions? Yep. "The images you've uploaded appear to show facial detection or recognition boxes, typically used in biometric analysis or image processing software. The red and blue markers likely represent key points used to identify facial features."
The problem might be that the software is not consistently detecting the facial features across different images. Here are a few potential issues and solutions. Inconsistent lighting: the software could be sensitive to lighting conditions, so ensure uniform lighting in the environment where images are captured. Facial orientation: if the software is designed for front-facing portraits, slight head tilts could throw off detection; you'd want to calibrate the tool to handle various head positions or instruct users to maintain a straight posture. Feature detection algorithm: the algorithm might need optimization; for example, machine learning models may require more diverse training data to improve accuracy. Software calibration: there could be a need to recalibrate the software to better define the threshold values for feature detection. Resolution and quality of images: the tool may not perform well with low-resolution images, so using higher-quality images could resolve this. Real-time correction: implement real-time feedback within the tool to prompt users to adjust their position or environment for better detection. Updates and patching: ensure the software is up to date with the latest patches that may fix known bugs or improve feature detection. To proceed, you would typically collect more data on when and how the failures occur, run diagnostics on the software, and perhaps consider controlled tests with varied conditions to pinpoint the exact problem.

The function is currently not used. That's okay. Now the split should work again. And that's working. Okay, what else? Yeah, LinkedIn will throw us out because the live streams are limited to four hours, so we should say bye-bye to the LinkedIn viewers. We might finish up anyway. But the detection, I don't actually want to use this function. And we don't need that, we don't need this. Let's see. Quickly, quickly, quickly, we don't have time. Yes, the split toggle works. We eventually want to get rid of this function. We have... where is it? The pupil detector. Yep, we have this too. Yep, that's using the darkness threshold as well. The darkness threshold is getting its element from the front panel. Let's see if it works. Split works. The eye height threshold works. Right, this is working as well. This is actually quite nice because you can... how come it's still working? Yeah, so if I change this one, it becomes super sensitive, as it should. So it's actually working pretty well. Quite happy with that. The split view. Let's see if we can... we don't have any errors. That's okay.

So we have the eye width ratio, the eye height ratio, the darkness threshold. Oops, that's the eye height. Yeah, this one actually... anyway, this section shows you how well it's working, or not. So it's converting the pixels within that region into grayscale. It's pretty good. I don't think that number should be zero. Let's double check that number for the darkness threshold. The minimum is 50, the maximum is 150, steps of one, and the default is 100. Yeah. It should be adjusted by eye. It's still pretty decent. Well, it teaches you quite a lot. It taught me quite a bit, that's for sure. So hopefully it will do the same for you. And we hope to publish it soon. Yeah, the pupil detection was also taking the red, green and blue coefficients, but that's only to do with how the grayscale conversion is being made. So I don't know, it probably doesn't make much sense to make them available on the front page. Yes, you can choose whichever parameters work best for you.
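For reference, a control matching those values could look something like the following; the element ID and the way it is read each frame are assumptions for illustration, not necessarily how the project's main.js actually wires it up.

    // Assumed markup for the darkness threshold control, matching the values read out above:
    // <label>Darkness threshold
    //   <input id="darknessThreshold" type="range" min="50" max="150" step="1" value="100">
    // </label>

    // Inside the detection loop, read the slider's current value on each frame:
    const darknessThreshold = document.getElementById('darknessThreshold').valueAsNumber;

Reading the value each frame keeps the parameter live while the stream runs, without having to re-register listeners or restart detection.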
And Bob's your uncle. We might need separate adjustments for the left and right eye, but yeah, it's looking pretty good. So I'll see you next time. Hopefully next time we can do one of the other tools. We actually had GitHub Copilot doing some code changes for us, and it's working really well now. So, I know you don't have the updated code, but maybe you can give a summary, an overall summary of the project. And considering you're co-hosting this YouTube channel, you can say goodbye to the viewers. Yep, that's it.

Absolutely, I can help sum up the project and close out for the viewers. Today, we took a deep dive into an exciting project: a webcam-based eye-tracking system using JavaScript and TensorFlow.js. We explored the intricacies of facial landmark detection and pupil tracking, tackling challenges like dynamic lighting conditions and occlusions caused by eyelids; identifying and planning controls for the magic numbers in the code to enhance adaptability; discussing the potential of implementing a Hough circle transform algorithm in JavaScript for more accurate pupil detection; and contemplating strategies to keep the application light and client-side, favoring real-time performance and accessibility. We also considered ways to integrate user-controlled parameters into the application, allowing users to fine-tune the eye-tracking system for their environment, which is crucial for diverse lighting and usage scenarios. In the spirit of continuous improvement and community contribution, it seems GitHub Copilot played a role in advancing the codebase. This project stands as a testament to the collaborative nature of development, blending AI assistance with developer insight. As we wrap up this session, we encourage viewers to engage with the project, contribute their ideas, and experiment with the code. Remember, open source is not just about freely accessible code, but also about the community that grows around it. If you're watching this on YouTube, don't forget to hit that like button, subscribe for more tech deep dives, and share your thoughts in the comments below. We're excited to see where this project goes next and how it evolves with the contributions of the developer community. Signing off from this coding adventure: keep iterating, keep innovating, and happy coding.