eye tracking thing that GPT made for us. It's not working very well at the moment. Now, obviously there's a difference between eye tracking and pupil tracking, and this one doesn't do pupils well. Eyes, it's pretty spot on, especially when it can see both of them. Yes, sometimes it's slow to respond, and if it doesn't see the other eye it assumes the right eye is the left eye, things like that, but it's a very cheap setup. I do have another camera, so I was wondering if I could do more advanced eye tracking with two webcams. Let me know what you think: would a second webcam do a better job at eye detection, or tracking? Tracking is the better word. Those passive systems are obviously not just tracking the eyes; they're tracking the pupil with a single camera.

What if we do a workspace rewrite based on the eye tracking file? GPT-4 can do much better because it can hold context for the whole thing. I'm sure it will write a much better description, which will actually be more accurate as well. I like how this one has links in it too; that's handy. So, asking it to rewrite the HTML description. Again we can use a workspace: "Can you write a description for this application, including an About section and a How to Use section? Include all the libraries that are being used, and look at the other files, like the eye tracking Python file and main.js, to see how the application actually works, so the description is as accurate as possible." So it's deciding which information to collect, gathering info; it's used five references. Not sure why that won't open; maybe when it's finished generating. It's again going on about circuits, which we are not doing. That part is correct, though. How to Use: I mean, it's running. It will eventually be running as a Flask application, so we might have a requirements file, and eventually we might pop it on GitHub as well.
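The "right eye treated as the left eye" problem mentioned above comes from labelling the eyes when only one is detected. A minimal sketch of how the labelling could be made explicit (function and box format are my own illustration, not the app's actual code):

```python
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # x, y, width, height in image coordinates

def label_eyes(boxes: List[Box]) -> Dict[str, Box]:
    """Label detected eye boxes as 'left'/'right' by horizontal position.

    In the mirrored webcam view, the subject's right eye appears on the
    left side of the frame. With only one detection we return it
    unlabelled instead of guessing, to avoid silently treating the
    right eye as the left eye.
    """
    if len(boxes) >= 2:
        a, b = sorted(boxes, key=lambda box: box[0])[:2]
        return {"right": a, "left": b}  # mirrored view: leftmost box = right eye
    if len(boxes) == 1:
        return {"unknown": boxes[0]}
    return {}
```

The single-eye case is exactly the edge condition that shows up later in the session.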
We had some issues with Git in general. More about "the first sophisticated product leveraging web-based eye tracking technology for remote experiments". Okay, "how to reach populations"... maybe less of that. Yes, remote eye tracking. I don't like any mentions of cognitive tests and things. That description is not great; we might get GPT-4 to write something better. This one was meant to be using GPT-4, and it obviously did look at these references, but maybe if I delete the current description... it was looking at the old copy, actually. I think it will be easier to get GPT-4 to do it.

Obviously it will have to be calibrated somehow. Currently this button doesn't actually work, but there is a suggestion for how a calibration system could work. See, one eye is a bit off, and it gets quickly corrected here, I would imagine. The return type is a tuple of integer tuples; I don't know how to pronounce it, but it is working. It's always interesting: when I move to the right it's tracking really well; when I move to the left it's lagging. Must be something in the code. Again, this type of context is better for GPT-4 to handle. Obviously you can cover one eye, or try to swish your eyes away. By the way, that's what the NVIDIA eye replacement thing does: it just draws your eyes onto these little patches. We can make the markers smaller and potentially have a circle around them, something slightly more sophisticated.

Process frames: we use the Face Mesh model from MediaPipe, drawing one red dot over the pupil and a blue square over the entire eye. Right, so now I have these blue squares and the eyes are still inside them. Obviously it can recognise what's going on here, so the prompt is: "Can you find any issues with the output of this code? The pupils are not properly detected."
Is there a better way of doing this? It suggests a new function: extract the region of interest, convert it to grayscale, find the darkest point, and return its location. Those magic numbers will obviously make this not generalisable: if you use a different camera or a different angle, it might not work anymore. Are we happy to try it out? So hopefully it should correct the circles in the frame-processing step: instead of eyes, we draw pupils. Actually, let's keep the red dots where they are and, in addition, have green ones where the pupil is.

One of them is working surprisingly well; the other one keeps flickering around, all over the shop. So it's working. It's really funny: maybe there is something wrong with my eye. Someone else has to try it as well. Which one is it? This one works. That's an edge condition when only one eye is visible, so yes, it would not work. But it works almost perfectly, not perfectly but like 90%, with one eye, and a bit funny with the other eye. I guess the red dots are just the centre of the square.

One approach to development that seems to be working well is using GPT and then essentially generating prompts for GitHub Copilot to use. Both of them are jumping around; maybe another example. Is it the same or different? Well, I can't tell, because I can't look at the camera and the screen at the same time. If you had the NVIDIA eye gaze correction thing you could, but that's fake. I have two examples. If I turn on that light... no, I still don't understand. I guess that's the code Copilot made for us. We have these two examples. There's a couple of things: one eye is working better than the other, like this one, especially when I do this. Kind of both of them work; might be the light.
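The suggested function above (ROI, grayscale, darkest point) is essentially a one-liner. A minimal NumPy-only sketch of the idea; the transcript's version uses OpenCV, where `cv2.minMaxLoc` gives the same result:

```python
import numpy as np

def find_pupil(gray_roi: np.ndarray):
    """Return (x, y) of the darkest pixel in a grayscale eye ROI.

    The pupil is usually the darkest region of the eye, so the global
    minimum is a crude but cheap estimate. The flicker seen in the demo
    is a known weakness: any single dark pixel (shadow, lash, glare
    edge) can win, so the point jumps frame to frame.
    """
    y, x = np.unravel_index(np.argmin(gray_roi), gray_roi.shape)
    return int(x), int(y)

# Synthetic ROI: bright background with one dark "pupil" pixel
roi = np.full((20, 30), 200, dtype=np.uint8)
roi[7, 12] = 10
```

Smoothing the ROI first, or averaging the darkest few percent of pixels, would make this far less jumpy; that is the kind of fix the ROI magic numbers are standing in for.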
Yeah, I have light coming from three different sources, so that might make a lot of difference. This one works much better, as you can tell. As we go through a couple of improvements, we had some prompts, so we might update the find_pupil function: looking at the ROI in the same way, converting to grayscale, but detecting circles with the Hough circle transform. We'll have to read about that. Might as well comment this out and just make the whole thing a bit smaller. Pop that in for find_pupil. No: I'm getting a fatal error, and I have to restart the application if I make any changes to the eye tracking Python code. Okay, it's not much better, is it? And, poof: "np is not defined". What? Oops. np is, of course, NumPy; we need the import. Anything else? The minimum value is not being used. Let's try this again.

So the red dot just stays in the centre of the eye. Now, this one works really well; that one not so much. I also have two cameras, and I'm thinking whether I should use the second camera for correction or something. This is the kind of stuff we can fix with Copilot. Apparently we just don't need that variable at all. The app is not currently running. There was another one: the mean value is not used. We select the whole code and pop it in. It's finding the darkest point, so it should be working okay. It doesn't matter if I have it there or not. We do need to run the application. I'm just surprised that one eye is always better than the other; maybe it's something with my eye. I need someone else to test this for me as well.

We'll also need to rewrite the description; GPT currently only had access to the eye tracking Python code. I'm also thinking of a controller for the find_pupil ROI size: you want to control the magic numbers on the page. So yes, one eye is working better than the other. Be on viralcales.com, go check it out if you haven't done so already; there's a lot of tools that you might be interested in.
The more recent ones are up at the top, and some of the old ones at the bottom do not work anymore, because we haven't transferred them into Python; they're still LabVIEW. The videos show the tools actually working: the neural net, the fuzzy logic, the annotations, the statistical analysis. These four were originally written in LabVIEW and do not work anymore because we no longer have the licence. We're migrating everything into Python. The new tool that will come up will combine this ECG game and the fuzzy logic; it's called the CardioQuest bot. That's what the code looks like, and yeah, we need to turn it into a Flask application.