Photoplethysmography, PPG, is a simple and low-cost optical measurement method that is often used to detect blood volume changes in the microvascular bed of tissue. It is non-invasive and can be used to monitor heart rate, blood oxygen saturation, and other cardiovascular metrics. The principle behind PPG is that blood absorbs light more than the surrounding tissue, so by shining a light source, usually an LED, onto the skin and measuring the amount of light transmitted or reflected, one can gauge the blood flow dynamics in the tissue. Here's how PPG typically works. Light emission: a light source, such as an LED, emits light that penetrates the skin. Absorption variation: the blood vessels absorb varying amounts of light, corresponding to heartbeats, as the blood volume in the microvascular bed changes. Detection: a photodetector measures the light that is not absorbed by the blood. This signal fluctuates in time with the cardiac cycle.

Okay, let's just do a sound check quickly and we'll start in a sec. I don't have much time today, so I'll jump into development straight away. Okay, I don't have to monitor my own sound, my own voice — I don't like it anyway. And let's do this weekly. So I had that blog post up on Reddit. I did spend quite a bit of time making it, but yes, it was largely generated by GPT-4. That image is not great, though. Let me know what you think of the layout, the general structure, the language. Yeah, I was toning it down quite a bit. I don't remember how many prompts; I can probably check. And we're doing another one on PPG — someone was asking what PPG is, so we'll have a whole blog on it. And yes, I'm more interested in all sorts of specific issues about the recordings themselves, so I don't care so much what it's actually called. We will actually have some practical examples and see what the advantages and disadvantages of different recording types are, mainly transmitted versus reflected recordings. Yes, googling helps quite a bit. Not sure what the bot will be able to generate in terms of images — actually quite curious — but we might do this later.

Yeah, I'm having all sorts of issues with my USB device because my microphone is not properly connected via another pre-amplifier. Talking about amplifiers — sorry, I'm jumping around a lot — we also have a blog on EEG hardware now. I can open it on the live server; that's not in production yet. The idea is that I'll record my own EEG. I've done it many times before, so I should be more than capable of doing that. But the problem is that I can't use my older gear anymore. I do have one of those old DAQs — the data acquisition device — but the problem is I can't connect the electrodes to it directly. I have to use a pre-amplifier; a Texas Instruments differential instrumentation amp could help a lot. And this text is just so generic. You don't need the key card or anything like that — not sure why it even mentions it. Yeah, we need a good CMRR, a good common-mode rejection ratio. Yes, you'd have to look it up. Or should I have a section on it? Should we focus on this or move on? Let's move on to proper development.

That tool now looks better — it now has a background, so it looks more like an output. Yeah, we don't like ads, but hopefully they're not too intrusive. They're now delayed, by 10 seconds or so. I could probably delay them further; I was just playing around with the delay function.
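(Not in the blog yet, but to make that detection step concrete: here's a rough Python sketch of how you could estimate heart rate from the pulsatile PPG signal. The sampling rate, the synthetic signal, and the peak-finding parameters are just assumptions for illustration, not anything from the actual app.)

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_heart_rate(ppg, fs):
    """Estimate heart rate (bpm) from a raw PPG trace sampled at fs Hz."""
    # Remove the slowly varying (DC) component so only the pulsatile part remains.
    ppg = ppg - np.mean(ppg)
    # Require peaks to be at least ~0.4 s apart (i.e. below ~150 bpm for this sketch).
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=np.std(ppg))
    if len(peaks) < 2:
        return None
    # Mean interval between successive peaks, converted to beats per minute.
    mean_interval_s = np.mean(np.diff(peaks)) / fs
    return 60.0 / mean_interval_s

# Synthetic example: a 1.2 Hz (72 bpm) pulse riding on a slow baseline drift.
fs = 100.0
t = np.arange(0, 30, 1 / fs)
ppg = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 0.2 * t)
print(estimate_heart_rate(ppg, fs))  # roughly 72 bpm
```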
Which is done on my server end. Yeah, this needs to be made clear: the filter order only affects the frequency domain. Now there's a description — the filter order goes from 1 to 4; it's just the sharpness of the band-pass filter. The default setting is 2. That looks correct. This thing doesn't actually show you the value, like the window size there — that little value will actually change. This is in production, by the way, so you can go check it out. It's at this URL, popped into the chat for you to play with. You can change between different types of wavelets, the window size — there's a lot to play with. I'll leave that there for now.

We have another blog. Hopefully all these blogs will eventually turn into web applications, so they'll be more useful than just text — they'll actually show something useful, like EEG signal quality. Yeah, we can do a lot; we can do a web application there. We'll see how we do it later. This application is an earlier, simpler version of the EEG data time and frequency domain tool. It's just scrolling through the file; you can select your channel, same as the previous two — the newer two. This one here obviously also has options to make the signal less noisy; you can play with that. Okay, the ads are really annoying. I should probably be showing this on my development server, not in production, so the ads don't pop up.

What else? Let's go to the bottom quickly and jump around. Yeah, those five do not currently work. So which of these three should I start with? Any preferences for what I should be looking at first? Let's open them quickly side by side. Yeah, we had the original tool that used to work, and we have some description for it that we can pop into ChatGPT as well. Let's try it and see. We had something already. So this is feature extraction from images. I actually have some unique images that I could try this on. Let's see, where is the feature extraction? Right, we don't have a description; that's that old LabVIEW HTML — actually PHP, whatever. Yeah, I'm getting rid of those. Let's have an info text first, and this one could just have that. We had the original video; it actually shows how it's supposed to work. But this text can be popped in here. I have the description there: the FAST feature point detector detects corners in an image using the Features from Accelerated Segment Test (FAST) algorithm. FAST identifies all interesting features in an image and selects features so that they're consistently detected. The Harris corner detector uses the Harris corner detection algorithm; the Harris corner detector is a mathematical operator that finds corners in an image. The Shi-Tomasi corner detector uses the Shi-Tomasi corner detection algorithm; the Shi-Tomasi detector is based on the Harris corner detector, but there is a difference in the selection criteria for the corners — Shi-Tomasi works well in cases where the Harris corner detector fails.

Okay, that's enough description. So essentially we should have a detector type. So we have inputs, and the detector type goes with the image options. The detector type should be Harris and Shi-Tomasi. Yeah, we have that description at the bottom. And we also need a raw data option. And this will be the output — or should we just display the raw, original image? No, let's just display the original image. So I have outputs: original and processed. Oh, come on, GitHub. Yes. And processed image.
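(As a side note, not something from the old tool itself: those three detector options map pretty directly onto OpenCV calls. Here's a minimal Python sketch — the function names are real OpenCV APIs, but the parameter values and the example file path are placeholders I've picked for illustration.)

```python
import cv2
import numpy as np

def detect_corners(image_path, detector_type="fast"):
    """Run one of the three corner detectors discussed above on a grayscale image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    if detector_type == "fast":
        # FAST: Features from Accelerated Segment Test.
        fast = cv2.FastFeatureDetector_create(threshold=25)
        keypoints = fast.detect(gray, None)
        return np.array([kp.pt for kp in keypoints])

    if detector_type == "harris":
        # Harris corner response; keep points above a fraction of the maximum response.
        response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
        ys, xs = np.where(response > 0.01 * response.max())
        return np.column_stack([xs, ys])

    if detector_type == "shi-tomasi":
        # Shi-Tomasi ("good features to track"), a variant of Harris
        # with a different corner selection criterion.
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                          qualityLevel=0.01, minDistance=10)
        if corners is None:
            return np.empty((0, 2))
        return corners.reshape(-1, 2)

    raise ValueError(f"unknown detector type: {detector_type}")

# Example (placeholder file name):
# print(detect_corners("static/images/seizure_01.png", "harris").shape)
```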
Right, and that's some description. So now the question is: should this be a Flask application or not, or can it run on the client side? Yeah, we have that. It's going to be GPT-4 straight away. There was some way to jump between them, something like that. "The decision to implement a feature detection system as a Flask application or a client-side JavaScript application depends on several factors, including the nature of the data you're working with, the complexity of processing, and the user experience you aim to provide. Here's a breakdown of considerations for both approaches." Yeah, many considerations — server-side processing, privacy, visibility and integration, development considerations, and so on. "Python, used in Flask, has robust libraries for image processing and computer vision, which can make the implementation of detectors like FAST, Harris, and Shi-Tomasi easier." I think it just sold it to me. Yeah, on the client side there might be limited capabilities. With Flask — yeah, and we already have, what, eight, nine, ten of them — one more should be easier. "Creating a Flask application for feature detection in images is an excellent choice for several reasons, particularly due to Flask's ability to handle server-side processing and the ease of use of Python's image processing libraries such as OpenCV. Here's how you can structure your Flask application for feature detection." I'm also checking what images we could use as examples. Basic structure: we have the web interface and the back end. The back end handles GET and POST requests and uses OpenCV to implement the feature detection. That's great. Saving the images temporarily — that's okay; maybe change it later. Like the output, handling raw data. What? That actually is interesting: normally when you're talking about biomedical images you're thinking of scans, but here time-series data can also be an image. That could actually be quite an interesting thing to do, because we could just go into one of the other tools, generate quite a few images, and see how the feature detector deals with them. Or should we go with imaging examples? Because the algorithm will be — "you can use these steps to implement feature detection: install Flask and opencv-python-headless." I have Flask installed. AJAX — I did use AJAX in one application, but not in a more recent one; shouldn't matter too much. EEG images versus, say, micro-CT scans — just see what it says. You would process the images quite differently, because you'd probably look at different features. "Perhaps if you use micro-CT data: characteristics, volumetric image processing, complexity of data." Yes, that data is more complex. I don't think I have 3D images anyway; I could try to get some 3D micro-CT, but at the first stage it will probably just be 2D images. What's the final outcome? Is it just used, for example, in diagnostics? What would the setting be — a research, development and engineering context? This will dictate the precision and speed requirements; for me it's mainly for educational purposes. Speed is important. Speed... Where do we get the images from? That is important. Data, HTML. Let's do the structure of the application files, and all this. You don't need to spell correctly for it to understand. Basically the main page and result display. Okay, so put it in a single page — a single-page layout, please, for your application, loading the images. Those are all the route definitions. It's a bit odd.
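(To pin down that back-end structure, here's a rough Flask sketch along the lines of what ChatGPT is suggesting — not the final app. The route names, and the choice to decode the upload in memory instead of saving it temporarily, are my assumptions.)

```python
# app.py -- minimal sketch of the back end: GET serves the page, POST runs OpenCV.
import cv2
import numpy as np
from flask import Flask, render_template, request, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # Main page with the upload form / image drop-down.
    return render_template("index.html")

@app.route("/upload", methods=["POST"])
def upload():
    file = request.files.get("image")
    if file is None:
        return jsonify({"error": "no image provided"}), 400
    # Decode the uploaded bytes directly instead of writing a temporary file.
    data = np.frombuffer(file.read(), dtype=np.uint8)
    gray = cv2.imdecode(data, cv2.IMREAD_GRAYSCALE)
    # Shi-Tomasi corners as an example; the detector type could come from the form.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    points = [] if corners is None else corners.reshape(-1, 2).tolist()
    return jsonify({"num_features": len(points), "points": points})

if __name__ == "__main__":
    app.run(debug=True)
```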
Can I have a standard... What's the last application that we made this month? Do it in the same way — the EEG spectrum and noise one. EEG spectrum and noise, and this one here. Like this structure. And this will be the images — we'll see where we get the images from in a second. So the app is called feature extraction. It will have an app file, requirements, readme, static with the style sheet — this one; what's the standard name? — and main JavaScript, and of course index HTML. Data — this will be the images. That script is not for EEG. And this will be image 01. Let's do this. Okay, I don't know why it just rearranged the thing and started explaining each file — I didn't ask it to. That's okay. Can we stop generating that? Okay. And have those. Let's go to the next one. Generate only... circles? Update the HTML — well, just the HTML. And this is the first time generating the HTML. So why do we have an upload? Okay, yeah, we're just generating the templates for the files. The JavaScript file — while it's generating, let's do the folders quickly. Or should we just copy them? Well, this definitely goes into a folder; the others could be done later. I actually need this. We have the info file. Copy those quickly. I have requirements; we will need to regenerate those. Obviously I can delete that. The JavaScript — yeah, we're using the DOMContentLoaded listener; that looks good. This file I can straight away correct to the new application name. It generated some Python code — just a template. Right, now we're really generating the HTML. Okay, yes, we want a separate CSS, and probably after the CSS we'll have to update the HTML again. Obviously we're not ready to deploy — we've literally just started with this application, so we don't have much. Okay, where is the CSS? The CSS — all the body, h1, h2 rules should be commented out, because we might actually leave it with the HTML template for now. Just checking that it doesn't have any styling or JavaScript in it; we would like all of those to be separate. It's a bit messy. Report title — obviously we need to change the title. Okay, I don't like this already because, as I said, the style CSS is not in a CSS folder; it's actually in the static folder, whatever. Well, yeah. Content id, form, input file image. We're using the Plotly JavaScript library. Let me do this with the local server quickly. Choose file, extract features. This description is wrong — it's feature extraction from images — and all this description can go. Format document. Let's see what's what. Okay, HTML, the JavaScript. Let's go for the JavaScript. The JavaScript can be renamed to main.js. I can get rid of all that extraction explanation. We need some images and data — at least one. If we're doing an EEG segment, we could actually use this slider... it's not great. This is an actual seizure. How about we use that as an image instead, as an export? Download plot as PNG — done. Let's pop this in; let's call it seizure_01.png and pop it into data. Right, it should be a file. I should maybe get rid of all the labels later. More prompts to use. We haven't done the CSS file yet; let's go with it for now. Structure, colors — yeah, we'll leave the colors for now. The example images are in this folder. Okay, HTML — where do we edit? It looks like... again: choose file. Definitely. Okay, and the files — all the image files are in this folder; choose the first one by default. So this actually does not matter; just pop another image in there. What can we use? We actually had these images for the wavelet image compression — we could use some of those. Sorry for the ads.
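(For reference, the layout we're copying from the previous app would look roughly like this; the exact file names are assumptions based on what's on screen.)

```
feature_extraction/
├── app.py
├── requirements.txt
├── README.md
├── static/
│   ├── style.css
│   └── main.js
├── templates/
│   └── index.html
└── data/
    └── seizure_01.png   # EEG seizure plot exported as PNG, used as a test image
```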
Which one's good? Yeah, the first one. The image group used for the wavelet compression — the wavelet static images. Right, so this should be in static/images. First one, MRI brain. SA02 — yes, this one. Static images. Okay, let's rename this folder to images and put it in the static folder. Name this 01 seizure. Yeah, so it's just at the top of the file. Copy path — relative path: static/images/seizure_01, or whatever it's called. All the image files are in the static/images folder, and choose the first file by default. You can get rid of that. Add a drop-down menu to select the image file — yes, and for the detector type we wanted to do that too. Yeah, that's GitHub Copilot, and those are good. Why are we using web browsing? Just use the standard GPT-4 default. Filter out files that are not images, if necessary — yeah, we don't really have to do that; we won't know at this stage. Change our HTML. Rename the template. Yeah, we don't need that secure filename thing — why? Because the images will just be on the server. Get rid of that image folder. That's right, we don't need to do that. Extract the full file path — that's not a bad suggestion. Yeah, we don't want a dummy extraction function; we want a real extraction function. Yeah, let's update this Python code. HTML. Next. That doesn't mean we don't need Plotly — keep the Plotly chart. Format document. Clean it up. So we have the file, choose an image, choose the data... and why is it not populated? Why is it flickering? How do we do that? The question is whether it's rewriting the whole Flask app or just a snippet. It's debugging. Yeah, it's still debugging. Let's create an index HTML template for the dropdown. So, HTML... scripts... Yeah, we don't have much time; let's get something running. Drop-down menu — that's the same image. Yep. Yeah. Option. So why is it the same? I don't get it. Why is this not populated? Images, static images, and the index HTML. I'm going in circles here. This is just the same. I don't get it. All right, I know what the problem is: I'm not running it as a Flask application. I have to run it through Flask. That helps, doesn't it? File, not file. Static images, static images, static images. The file is not found; it will search in the folder. Relative file path. Quickly fix this for me so we can actually move on to the actual development. This is the whole thing it suggests for me to replace. Right. Okay, now it's working. Okay, we are in business. Sure — images displayed.

I'm using Copilot all the time. I actually had to change its settings as well. Copilot — no, I'm using it all the time; I just didn't realize it had quite a few settings. Yeah, I think it was actually set up for developers, where you're doing everything yourself — I'm not a developer. So now it actually should be better; I haven't checked it yet. Make sure the images are displayed in the HTML by Flask, so I can try them both side by side. I would actually love to ditch ChatGPT, because it's more expensive and I wouldn't really need it if I knew how to use GitHub Copilot properly. So, for example — yeah, I'll try them both. It's directing us to change the HTML, because we meant to have these two images, original and processed. So we have the drop-down menu right here: place the original and processed images side by side. Yeah, that was... yeah, now it's thinking. I need it to generate the whole code. Give me two options. Yeah, I like the second one. Accept with Tab. Save.
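(The fix in plain terms: the image paths only resolve when the page is served by Flask, not when the HTML file is opened directly. A minimal sketch of the route plus the drop-down it feeds — the folder name static/images and the template variable names are assumptions:)

```python
import os
from flask import Flask, render_template

app = Flask(__name__)
IMAGE_DIR = os.path.join(app.static_folder, "images")

@app.route("/")
def index():
    # List the files in static/images and filter out anything that isn't an image,
    # so the template can build the drop-down and pick the first file by default.
    files = sorted(f for f in os.listdir(IMAGE_DIR)
                   if f.lower().endswith((".png", ".jpg", ".jpeg")))
    return render_template("index.html", image_files=files)

# In templates/index.html the drop-down would then be something like:
# <select id="image-select">
#   {% for f in image_files %}
#     <option value="{{ url_for('static', filename='images/' + f) }}">{{ f }}</option>
#   {% endfor %}
# </select>
```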
Didn't I say side by side? That's definitely not side by side. And the original image... well, can it see the script and the HTML? Make sure the images are displayed in the HTML file. Yeah, this is a much better example — placed together. Oops. Right. So, yeah, you can tell GPT-4 actually has context for the whole thing. I'm surprised. GitHub Copilot has access to all my files and everything — it's literally in the development suite — but when it's giving me advice, it gives me generic stuff, whereas GPT gave me the actual file with the right folder, and it actually works. So, yeah. I'm not sure about... I tried the one available to students with my university email, and it was a bit disappointing — it's still using 3.5. Being on 3.5 is... yeah, I think that's a problem: people use it and then get disappointed by it. I don't know why I'm in web browsing mode — the main reason is that I can start fresh and do it quickly, because I'm running out of time. But in default GPT-4, the main reason I'm able to use it is that it can store the whole context of all the files. So if I have all these one, two, three, four, five, six files, I pop each one of them in. The order is important: index HTML first. So if I start from scratch, the good thing about it — is it the right file? yeah — is that I just tell it to read the code so it doesn't generate anything else, and it then stores everything in its memory somehow. These prompt lines seem to work quite well. And I pop this in again. So it's actually... yeah, it says: great, the code is provided. So it has it in memory; it read the Flask application. That's great. The CSS I can sort out later as well; the JavaScript is the most important. We use the same line: confirm you read it, and do not generate any code or anything else. Copy all... Yeah, so GPT-4 is able to keep the context of all the different files — we're talking 100 lines of HTML, 30 lines of Flask, and the body of the JavaScript. And now I can actually ask it questions about my application. Yes, that HTML that GitHub Copilot generated didn't actually work, and I'm trying to find it and get rid of it — this one here. It gave me very generic stuff. Yeah, I just don't know how to use it. The whole point for me is that I'm able to... where was I? Yeah, read the JavaScript. It gave me some error or something; let's read this quickly: it sets up a client-side script to handle a form submission by sending the form data asynchronously to a server endpoint, upload, using fetch. The server is expected to respond with JSON data that either contains chart data for Plotly, if the feature extraction is successful, or an error message if it fails. Upon receiving a successful response, it calls renderPlotlyChart to create a chart using Plotly.js. (I'll sketch the two response shapes it expects at the end.) I wasn't actually listening to what it said. We're generating another prompt — we don't have to worry about spelling, just pop it in as is. And it's giving me... yeah, it has the context, so that's the stuff I already have, and then it's telling me what I need to change, and it looks correct. I don't have the processed image yet; hopefully we'll have it soon enough. Let's pop this in. Yeah, I don't have the processed image — that's a problem. And it can do the processing for me, because it has the Python code as well. Sorry, I went in the wrong... okay, I'll continue this later. See you in a bit.
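(As mentioned above, here's a sketch of the two JSON shapes that client-side script expects back from the upload endpoint. The key names chart_data and error are my assumptions based on that description, not read from the real app.)

```python
# Sketch of the two JSON response shapes the front-end fetch handler deals with.

success_response = {
    "chart_data": {
        # A Plotly scatter trace marking detected feature points on the image.
        "x": [12, 85, 140],   # example pixel x-coordinates
        "y": [33, 47, 120],   # example pixel y-coordinates
        "mode": "markers",
        "type": "scatter",
    }
}

error_response = {
    "error": "feature extraction failed: unsupported image format"
}
```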