So there's a bunch of tools on the front page — go check them out if you think of it. Let's start today with eye tracking. It should be a quick one, because we'll probably get stuck pretty quickly. Currently we have this Python script that works pretty well, but it has issues. This is the bit that actually does the eye tracking: it calculates the center of the eye and finds the pupil by looking at a region of interest, converting it to grayscale, adjusting brightness, and applying a Hough circle transform to find the pupil inside the eye; then it labels the darkest spot as the pupil. It kind of works on one eye, but not so much on the other. The red dot is just the center of the region of interest, and the green one is where the pupil is being found — well, where the darkest spot inside that region of interest is — and sometimes it even works, but obviously it needs more improvement.

This version is using a lot of my CPU and GPU, and the FPS is only at 15; when I first started it, it was at 30, as it should be. As you can tell, I actually have two cameras connected, so I was thinking of using one for each eye. This one also stops working if I move outside the field of view — it throws an error, an empty array, the usual stuff. I could probably fix that fairly easily, but we actually decided to start fresh with a new application that relies primarily on JavaScript instead of Python libraries. So we would not be using MediaPipe. It will still be a Flask application, so there's the potential to have something running on the backend, but it will rely primarily on JavaScript to reduce my electricity cost — if the processing happens on the frontend, you pay for the electricity, because it's running on your device. But then obviously the quality will depend on your hardware, memory, and so on.
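The "darkest spot in the region of interest" idea from the Python script can be sketched as a pure function. This is a minimal illustration, not the actual script (which uses OpenCV); the name `findDarkestSpot` and the data layout are assumptions:

```javascript
// Minimal sketch of the "darkest spot = pupil" idea. `gray` is a flat
// array of grayscale values (0-255), `width` is the image width in pixels,
// and `roi` is the eye's region of interest. Hypothetical names throughout.
function findDarkestSpot(gray, width, roi) {
  let best = { x: roi.x, y: roi.y, value: 256 };
  for (let y = roi.y; y < roi.y + roi.h; y++) {
    for (let x = roi.x; x < roi.x + roi.w; x++) {
      const v = gray[y * width + x];
      if (v < best.value) best = { x, y, value: v };
    }
  }
  return best; // candidate pupil center: the darkest pixel in the ROI
}
```

In a real pipeline you would smooth or vote over circular regions first (which is roughly what the Hough circle transform contributes); otherwise a single dark eyelash pixel wins.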
Yeah, and the idea is that eventually we'll have some sort of calibration. I do believe we could show that, with the right calibration, a simple webcam running at 30, 60, or even more frames per second can achieve results roughly comparable to systems that cost, I would imagine, tens if not hundreds of thousands of dollars. But yes, we'll need a calibration procedure where you follow dots that appear on the screen. It probably won't be fully automated, but semi-automated: it tells you to keep your head still and look around the screen, takes some snapshots, and then you confirm — "yes, I'm happy with that" kind of thing.

So let me close this one and move to the second version. You see, when I close the Python application server, it's still running in my browser, because everything has been loaded into the browser — and you can also see all the source code just by looking in the browser. But if I refresh, it's gone. Let me open the current application. This one relies primarily on JavaScript. It's still in the form of a Flask application, so if need be we later have the option to run things on the backend. It's doing the eye detection here at the bottom — not great. I also want to use Chrome, because I trust it a little more; Edge should be fine as well, I guess. Obviously we need to improve this quite a bit. This is where the eyes are, and from memory we're getting an error. Hopefully one of the bots will be able to fix this for us. How about we do a single-shot prompt for it? By the way, GitHub Copilot was not able to fix it, so unfortunately I have to copy-paste. So we have our HTML, we have our CSS.
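The dot-following calibration described above could, in its simplest form, fit a per-axis linear mapping from detected pupil position to screen position by least squares. This is a hypothetical sketch of the idea — real calibration would likely need a full 2D affine or polynomial fit, and all names here are made up:

```javascript
// Ordinary least squares for y = a*x + b.
function fitLine(xs, ys) {
  const n = xs.length;
  const mx = xs.reduce((s, v) => s + v, 0) / n;
  const my = ys.reduce((s, v) => s + v, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    den += (xs[i] - mx) ** 2;
  }
  const a = num / den;
  return { a, b: my - a * mx };
}

// samples: [{ pupil: {x, y}, screen: {x, y} }, ...] collected while the
// user looks at known calibration dots. Returns pupil -> screen mapping.
function calibrate(samples) {
  const fx = fitLine(samples.map(s => s.pupil.x), samples.map(s => s.screen.x));
  const fy = fitLine(samples.map(s => s.pupil.y), samples.map(s => s.screen.y));
  return p => ({ x: fx.a * p.x + fx.b, y: fy.a * p.y + fy.b });
}
```

The semi-automated flow would then be: show a dot, record a pupil sample, repeat, fit, and ask the user to confirm.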
Yeah, so that's the project file and folder structure — that looks legit. The main thing doing all the heavy lifting is this JavaScript file, as opposed to the previous version where Python did everything (running it in threaded mode should be fine). Let's prompt it some more: "I shared all the code. As you can tell, this is an eye-tracking Flask application. We would like to rely mainly on JavaScript on the frontend, so we use less electricity. It's working in terms of detecting the eyes, but the pupil detection is not currently working. We're getting the following error in getEyeBoundingBox — suggest a correction for it." Let me comment this out quickly. This one is ensuring that the coordinates and dimensions are integers within bounds. Sounds legit — but as we know, sometimes it only sounds legit and actually does some rubbish. In this case, it's good that we can actually test whether it works or not.

"If you can keep the jokes more relevant to the subject at hand, that would be great." It was still trying to solve the error — something basic isn't working. We had GitHub Copilot making some changes; we'll have a landmarks constant somewhere. "Need to ensure that the landmarks array passed to the extractEyeRegions function has the right structure." Yeah, Copilot is out of whack — we don't have that function. No, I do — it's just called something else, isn't it? Right, so the landmarks should have x, y, width, and height for each eye, is it? It was definitely working better before. Okay, it seems to be tracking the eye now, but there is only one eye being displayed below the video footage. The x and y coordinates seem to be working okay; the width and height aren't actually doing anything. Do we actually need them? So the question is: where's the other eye? Where's the region of interest? I guess that's the little square. Is it too small, potentially? "Look at the image provided."
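The "ensure coordinates and dimensions are integers within bounds" fix mentioned above could look something like this. A hedged sketch — `clampEyeBox` is a hypothetical name, not the app's actual function:

```javascript
// Clamp an eye bounding box so its coordinates and dimensions are integers
// that stay inside the video frame. Guards against the out-of-bounds /
// empty-array style errors seen when the face leaves the field of view.
function clampEyeBox(box, frameWidth, frameHeight) {
  const x = Math.max(0, Math.min(Math.round(box.x), frameWidth - 1));
  const y = Math.max(0, Math.min(Math.round(box.y), frameHeight - 1));
  return {
    x,
    y,
    width: Math.max(1, Math.min(Math.round(box.width), frameWidth - x)),
    height: Math.max(1, Math.min(Math.round(box.height), frameHeight - y)),
  };
}
```

Calling `getImageData` with a zero-sized or out-of-frame box is what typically blows up, so clamping at the boundary is a cheap way to make the detection loop robust.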
I was hoping eventually all the labels would be overlaid on top of the video footage, but we also want all the processing stages of the video displayed — the grayscaling, the circle finding, the darker regions. We want to see everything on the front panel. So it's using BlazeFace from TensorFlow — I don't know how well that's working. It's computing the eye regions as essentially a percentage of the face size, width and height. That looks legit; it's doing face detection as well. We will eventually show the whole face detection process. It's having trouble integrating the rest of the code: if the prediction length is more than zero — so essentially there's a face on the screen — then extract the eye regions, now taking the face size as input. Okay, now the width and height are changing. The eye is a bit off. Actually, I kind of like it — instead of overlaying stuff on top of the image, doing it separately; maybe that's a better way to go. Where's the other eye? We are missing the other eye, but it's working better now. The numbers — the width and height, including x and y — are adjusting correctly.

"Check out the image that I added. I'll give you the whole code again, because we had GitHub Copilot doing some adjustments as well. If you would like to take over the code development, go for it, and make funny comments if you feel like it — but keep them relevant to the subject at hand." Yes, it does work. "Here's how it works" — it's explaining what we did before, which is good. The question is: where's the second eye? We have one eye detected, and even that is a bit off; we need the second eye. Then we can start labeling the pupils in it as well. It gets that the accuracy is not great — I didn't say it, it just gets it from the context. "Use a more accurate model." Okay. Right, that would be Python, isn't it? We're currently not using MediaPipe — what are we using?
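The "eye regions as a percentage of the face size" step could be sketched like this. The fractions below are illustrative assumptions, not the app's actual values, and `extractEyeRegions` here is a stand-in for whatever the real function is called:

```javascript
// Given a face bounding box, place two eye boxes at fixed relative
// positions. All fractions are illustrative guesses: each eye box is
// ~25% of the face width, ~15% of its height, sitting ~30% down the face.
function extractEyeRegions(face) {
  const eyeW = face.width * 0.25;
  const eyeH = face.height * 0.15;
  const eyeY = face.y + face.height * 0.3;
  return {
    left:  { x: face.x + face.width * 0.2,  y: eyeY, width: eyeW, height: eyeH },
    right: { x: face.x + face.width * 0.55, y: eyeY, width: eyeW, height: eyeH },
  };
}
```

This explains why the width and height now track the face: as the detected face box grows or shrinks, the derived eye boxes scale with it.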
We're using BlazeFace with TensorFlow.js. "Yes, the previous code was using MediaPipe, but we decided not to go with it." Okay, I'm not getting any errors for it, am I? Oops — no, we don't want to debug the OpenAI website. We don't get any errors, and we're mainly in JavaScript, so if there were any errors, they would be in the JavaScript. Yes, we have that little eye over there that expands as you move closer to the camera — that's good. But it's off, and we only have the one. Is it the console log? Are we really doing a console log? What are we currently logging? The prediction landmarks. So how do we know which one is left and which one is right? I mean, Copilot should just be GPT-4 anyway, right? But it gives very different responses for some reason.

So it's an array of six sub-arrays, each containing two elements. "This is likely a landmark." Likely? I need specifics. In the BlazeFace model, each sub-array represents a landmark on the face, with the two elements being the x, y coordinates of the landmark: right eye, left eye, nose, mouth center, right ear, left ear. Okay, and the function expects the first two landmarks. Is that even correct? "To ensure that the eyes are being picked correctly, could we visualize the landmarks?" That sounds good. And now we do have detectFaces; we need detectFaces to call drawLandmarks. Why not just do it inside — left eye first — starting to draw new detections? Why are we clearing? Okay, let me do it over here. Yeah, we're getting stuck. So drawEye is being called twice, but then the canvas is cleared. And then we have — this looks like the whole function; this will be the async detectFaces. Just comment it out. All right, so we have the stuff: the nose, the ears — the ears are moving. Ah, it's actually a pretty cool avatar. We could improve on it. Then the question is the pupil — for the pupil to work... no, the eyes are just the same.
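Using the landmark order given in that answer (right eye, left eye, nose, mouth center, right ear, left ear), a tiny helper can name the six `[x, y]` pairs so "which one is left and which one is right" stops being a guessing game. A sketch — `nameLandmarks` is a hypothetical helper, not part of the BlazeFace API:

```javascript
// BlazeFace returns six landmarks per face as [x, y] pairs, in this order
// (per the chat answer above).
const BLAZEFACE_LANDMARKS = [
  "rightEye", "leftEye", "nose", "mouthCenter", "rightEar", "leftEar",
];

// landmarks: [[x, y], ...] of length 6, e.g. prediction.landmarks.
function nameLandmarks(landmarks) {
  const named = {};
  landmarks.forEach(([x, y], i) => {
    named[BLAZEFACE_LANDMARKS[i]] = { x, y };
  });
  return named;
}
```

With names attached, the extract function can take `named.rightEye` and `named.leftEye` explicitly instead of blindly assuming the first two entries.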
The pupils are just in the same position all the time. We have some warning — do you know what that warning is about? We still have that eye detection getting smaller and larger — that's good, except we don't have the second eye. Across multiple frames we're still doing about 30 FPS, I would imagine. We'll display it on the screen. The canvas is being cleared at the start — it's this clearRect. I can just remove it; I would imagine nothing will get cleared if I remove it, so we just keep overlaying. Yep, it's moving in and out, which is nice. But the eyes are just jumping from left to right, so obviously we do need to clear. Do we have a function called draw? No. So why are you suggesting that? I suspect the two eyes are just being drawn one on top of the other — the actual eyes.

Right, that's a lot of stuff, and there's still only one eye. "This is what we get in the console log, and this is the current code that we have. Can we fix it so the other eye is visible? Also, the pupil labels are not working correctly — they don't seem to be moving within the eye. I uploaded a couple of images that show how the interface looks. There's another problem: the eye that is being displayed — only one of them — is out of focus; it's not actually focusing on the eye itself. In the little avatar, the pupils are not moving at all, the second eye is not being shown, and the two images are just being overlaid or something."

So this is a webcam-based eye-tracking system. We still need to calibrate and improve it. Eventually it will be available on bionicchaos.com for you to try out for free — hopefully it works a bit better than this. Now, we've also had about 20 sessions developing waveform feature extraction and detection for ECG, for the ECG game. The ECG game is already out there — you can go and play it yourself.
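The clearing problem above comes down to ordering: clear the canvas once per frame, then draw every detection, so the eyes neither smear across frames (no clear at all) nor erase each other (clearing between draws). A sketch of that pattern, with `renderFrame` as an illustrative stand-in for the app's draw code:

```javascript
// Clear once per frame, then draw all eye boxes, so both eyes survive.
function renderFrame(ctx, eyes) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height); // once per frame
  for (const eye of eyes) {
    ctx.strokeRect(eye.x, eye.y, eye.width, eye.height);
  }
}
```

If `clearRect` instead ran inside `drawEye`, the second call would wipe the first eye before drawing its own — which would produce exactly the "only one eye" symptom we're seeing.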
You play it by labeling ECGs as normal or abnormal, and soon we will have another version where there's a bot — a robot, a machine, whatever — playing against you. Not learning: it's a fuzzy-logic system, so it's all preset, a deterministic, or mostly deterministic, system that will play the game against you. So you can see how the robots will be taking over the medical time-series data labeling field. That's coming up.

Another big thing: we're also looking at some publicly available datasets. This one is Creative Commons — this is what we want to see. Please do publish your datasets under a Creative Commons license. I forget which variant — I don't want to go into licensing — but yes, essentially open. I still had to register on the website to actually download it, but there are quite a lot of subjects — about a thousand or so. It's the Functional Connectome Project website: a neuroimaging dataset with both EEG and MRI, where the participants perform some sort of tasks. Exploring this dataset will be a big project, so let me know what you think — or if you've looked at this dataset before, do let me know. I downloaded some of the files — apparently the wrong ones — and GPT-4 provided some code that could open them. Eventually we would like to have a full Flask application that opens this data, displays it, and processes it, so you can learn. Well, so I can learn.
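A deterministic, preset rule system of the kind described for the ECG-game bot could be as simple as labeling a beat sequence from its RR intervals. This is purely illustrative — the thresholds below are the textbook 60–100 bpm resting heart-rate bounds, not the game's actual logic, and `labelECG` is a made-up name:

```javascript
// Toy deterministic labeler: classify an ECG segment from the mean of its
// RR intervals (milliseconds between consecutive beats). Textbook bounds:
// < 60 bpm = bradycardia, > 100 bpm = tachycardia.
function labelECG(rrIntervalsMs) {
  const mean = rrIntervalsMs.reduce((s, v) => s + v, 0) / rrIntervalsMs.length;
  const bpm = 60000 / mean; // mean RR interval (ms) -> beats per minute
  if (bpm < 60) return "abnormal (bradycardia)";
  if (bpm > 100) return "abnormal (tachycardia)";
  return "normal";
}
```

A fuzzy-logic version would replace the hard thresholds with overlapping membership functions, but the key point from the transcript stands: the bot is preset rules, not a learned model.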