So there is this fuzzy logic tool application; there is already a similar example running on the website, so go check it out if you haven't done so already. It's over here, and you can play with it. That's the human-only version, with no noise options. Currently we are adding fuzzy logic features to it, and it looks like this. You can add noise to the ECG waveforms. There are some preset waveforms generated in JavaScript, just using basic math functions, and you can turn the noise all the way down. You have the human score, so you can click on it yourself; there are instructions at the bottom. Go check it out on the website and play it.

This one is a normal ECG waveform, so when I do not click on it, I get a reward in the shape of points; can't give you any other rewards. That was a hit, because that was an abnormal ECG. This one is also missing a bit, so when I click on it, I get a hit and 10 extra points. Same with this one. Not sure; I think it's the same one. I think there is a 60-40 ratio of abnormal to normal. That's a normal one, so I do not click on it and I get another 10 points. Click on this one as well. Every 50 points, the speed goes up.

And on the right side here, this is the new thing: our fuzzy logic algorithm doing the same task. It's not cheating; it doesn't have the labels. As I said, the waveforms are being generated in JavaScript, while the fuzzy logic is actually running in the back-end Python code, so it doesn't have the labels. It only gets the raw data in the same way as you see it on the screen: the raw values of the waveform. It's currently not doing very well, as you can see. That's because we just changed it earlier today: we removed one of the features we were looking at. We were looking at frequency, amplitude, and peaks (positive and negative peaks), and the abnormality score down at the bottom here is the output of the fuzzy logic system. We decided to get rid of frequency.
Well, mainly there are two reasons. First, it wasn't actually doing much. Second, we only look at a single PQRST complex, so there are no frequencies: no, for example, R-to-R distance like you'd have for continuous waveforms. We're looking at each QRS complex individually, so frequency didn't make sense. We are still looking at amplitude, but we give it a low priority; we're mainly looking at positive and negative peaks. And now we've also added an R-peak sharpness value, but we're still working on that. Originally we just had normal and abnormal values, with an arbitrary split: assuming the value is normalized between zero and one, we take below 0.5 and above 0.5. But that didn't work very well. I'll go into the details in a bit; these live streams are all about the details, so beware. So now, instead of just two membership functions, we're adding three: low, normal, and high. We are probably not normalizing it. The R-peak sharpness is actually the width halfway up the peak. This sharp peak gives a sharpness of 26, and there is another example of a not-so-sharp R peak that should give a larger number there, and it's not doing that yet, this one. That didn't work. And obviously, with noise, the whole idea is that things should become worse: even if the algorithm is performing well without noise, you can add noise and watch how performance drops. One of the rules for us is not to use any magic numbers, so for all these membership functions, if there are two, we just divide at 0.5; if there are three, we do an equal three-way split. In this case, for example, it's wrong. We get the bots doing the coding for us: GPT-4 and GitHub Copilot. GitHub Copilot normally doesn't have much context, so it makes a lot of mistakes; GPT-4 tends to be generally much better for us. Pulling up the code quickly: this is the waveform generation.
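The back-end implementation isn't shown on stream, but the idea described, sharpness as the peak's width at half its height, can be sketched roughly like this. The function name and the walk-outward approach are mine, not the app's actual code:

```python
import numpy as np

def r_peak_width_at_half_height(signal):
    """Width (in samples) of the tallest peak at half its height.

    Hypothetical sketch of the idea described on stream: a narrow
    width means a sharp R peak, so width is inversely related to
    sharpness. This is not the app's actual back-end code.
    """
    signal = np.asarray(signal, dtype=float)
    r_idx = int(np.argmax(signal))  # assume the R peak is the global maximum
    half = signal[r_idx] / 2.0
    left = r_idx
    while left > 0 and signal[left - 1] >= half:
        left -= 1                   # walk left until below half height
    right = r_idx
    while right < len(signal) - 1 and signal[right + 1] >= half:
        right += 1                  # walk right until below half height
    return right - left
```

A sharper (narrower) peak yields a smaller width, which is why the session later debates taking the reciprocal versus just using the width directly.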
There are only a bunch of them, four or six. There are essentially two variations for the QRS complex: a regular waveform, and examples where one of the other waves in the ECG signal is missing, so no P wave, no S wave, no T wave. Later we might be adding more examples as we go. So you can now increase the noise and see how the algorithm's performance drops, or you could add more waveform examples and see how that affects performance as well. We have the game logic, which does all the scoring, measuring the score and everything. All the score measurement is done in the front end and all the fuzzy logic is done in the back end, so the fuzzy logic, the bot, doesn't have the labels. They're isolated from one another, as you would expect, so when you play against the robot you can't complain that the robot is cheating or something. Okay, this is the end of the Python script. So we shared all the code with you. We would like to fix a few things. Our peak sharpness is not working well: we always get a value of 27, and it doesn't change much; sometimes it's 25. We need to check what that function is doing and improve on it. Also, the peak sharpness value is not being normalized. See, all of a sudden it went all the way up to 85, so there's something wrong with that calculation. We have to check it out, improve the code, and go over the membership functions. For the peak sharpness, we want three levels, low, normal, high, instead of just normal and abnormal, and we'll have to modify the fuzzy logic rules to reflect that change. Okay, I mean, yeah, we could normalize it; that would be easier. You normally normalize between zero and one, don't you? So I think that's reasonable. We're also applying a basic filter to reduce noise, a basic Butterworth. Have to check it; it might not be working correctly.
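The "basic Butterworth" isn't shown on screen; a plausible low-pass sketch with SciPy would look like the following, where the sampling rate, cutoff, and filter order are placeholder guesses rather than the app's actual settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def denoise_ecg(samples, fs=250.0, cutoff=40.0, order=4):
    """Zero-phase low-pass Butterworth filter for the noisy waveform.

    fs, cutoff and order are illustrative guesses, not the app's
    real parameters. filtfilt runs the filter forward and backward
    so the QRS complex is not phase-shifted.
    """
    nyquist = fs / 2.0
    b, a = butter(order, cutoff / nyquist, btype="low")
    return filtfilt(b, a, samples)
```

One quick sanity check for "it might not be working correctly" is to feed in a clean waveform plus known high-frequency noise and confirm the filtered output is closer to the clean waveform than the noisy input was.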
Yeah, in this sense the bot is cheating, isn't it? Because it can filter signals. So I'm thinking for the human we could also overlay the filtered version of the signal; we might do that later. So it's finding the R peak and then taking the distance between the right and left crossings; it's going halfway up the peak, which should be fine. But then, yeah, I'm getting 24 for this sharp R peak, and 25, 24. Okay, the noise could add that variability. And then I'm getting 97, so it's actually working pretty well. I don't know if it's changing over time, essentially becoming better over time. Are there any changes being made over time? I don't think so. We'll have to check that filter. Yeah, there's always someone saying, when you're working on ECG, "I'm more interested in EEG", and when you're working on EEG, someone wants the other one; maybe we should do both at the same time. But then someone will say they want PPG, which is also something we're interested in looking into. This is mainly just an example of how fuzzy logic works: instead of doing a general textbook example, we actually apply it to this synthetic data, to this ECG scoring game that we already have running. Yeah, the peak sharpness issue; we want to normalize it as well. Just go with the bot; it normally doesn't make mistakes at the beginning of a development session. So we have the old sharpness score return; I just comment that out and use the new one. This should be normalized as well. Yeah, the numbers are now very low: 0.03, 0.04. And that's expected to be the same. And then 0.01. Okay, why on Earth doesn't it go? This function calculates the sharpness based on the width at half the maximum height of the R peak, which is inversely proportional to the sharpness.
If the R peak is very pointed, this width will be small, and the sharpness score will be high. Yeah, but first, the numbers are generally way too small. Next, for normalization, you can use min-max scaling based on the observed range of sharpness scores, or standardize based on the mean and standard deviation if the data is roughly normally distributed. So it's not normalizing it yet, is it? The membership functions can then be adjusted in the fuzzy logic system to reflect these three levels. You need to define the universe of discourse for the R-peak sharpness and set appropriate trapezoidal or triangular membership functions based on the normalized scores. Okay, sometimes GPT-4 goes into this "error generating response" state, and you can actually continue. You can regenerate the response, which I'm happy to do, I'm just not sure it's gonna work. I'll just refresh the page quickly to continue generating and see if it's still on topic. Maybe, maybe not. Yeah, I was saying GPT-4 is better in this case because we're doing more general stuff. We can edit the last prompt as well. Now it got stuck on us. Yeah, the interface is not great, is it? At least when we refresh the page, it lets us continue with the chat, so I might do that. Okay, 0.4, 0.3. There should be one that gives something else. So we have this one, then we have the 0.1 or something; yeah, this one. Okay, so with these two examples, is it behaving as expected or not? Can you double-check? Also, the numbers are very low; how can we fix this? Yeah, I think it's broken. That's something you cannot do in GitHub Copilot, can you? But then it's thinking forever; doesn't look promising. Might have to try and fix the code manually. Okay, it's really stuck, isn't it? Right, now it's working. Okay, that's right. How can we fix it, though? Other morphological features that could be more indicative of sharpness.
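The min-max suggestion from the bot, which the session ultimately shies away from, would look something like this in isolation. Note that observed_min/observed_max imply exactly the value-storing bookkeeping that gets questioned later:

```python
def minmax_normalize(value, observed_min, observed_max):
    """Min-max scale a raw sharpness score into [0, 1].

    Sketch of the bot's suggestion only: observed_min and observed_max
    would have to come from sharpness values stored somewhere, which is
    the bookkeeping the session decides against.
    """
    if observed_max == observed_min:
        return 0.0                      # degenerate range, avoid divide-by-zero
    scaled = (value - observed_min) / (observed_max - observed_min)
    return min(1.0, max(0.0, scaled))   # clamp out-of-range readings
```

The clamping matters: without it, a reading outside the observed range would escape the [0, 1] universe the membership functions expect.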
Yeah, I thought the distance between the two crossings should be fine; what's wrong with that? It's finding the maximum peak and taking the width at half height. I'm pretty sure this, yeah, can we get rid of these two? Sharpness, cool. But we need that, don't we? Wait a sec. So don't do the one-over, just use the width. Right, so we get 97. Wait, normally. Yeah, the rules are not working because that 97 is way outside the 0-to-1 range. That's fine. Right, okay, we get 97 and 91, which is not much of a difference, and then 24. There's something wrong with it, isn't there? Maybe there's something wrong with that filter. Hey, we should display the filtered data. I mean, if the bot has access to filtered data, then the human should have it as well. And yes, that could make human labelling a bit easier, depending on how well the filter is working. Consider the scale of the measure: if the sharpness values are too low, it might be due to the scale of the waveform data or the calculation method. Ensure that the units of measurement for the waveform data are appropriate, and consider scaling or transforming the sharpness values to a more useful range. Okay, if we're scaling, why do we need to transform? Normalization: normalization is crucial to ensure that the sharpness values are comparable across different waveforms. Normalization should be done based on a known range of sharpness values observed in the data. For example, if sharpness values typically range from 0.01 to 1, you can normalize them to a range of 0 to 100 for better interpretation and use in the fuzzy logic system. Well, ideally we would not even need to normalize. Adjust the fuzzy logic membership functions: once you have a proper range of sharpness values, you can adjust the membership functions in your fuzzy logic system to reflect three levels, low, normal, and high.
Make sure that the universe of discourse for the R-peak sharpness includes the entire range of observed values after normalization. That's fine. Debugging: implement debugging statements in your code to print out intermediate values used in the calculation of R-peak sharpness. This can help identify where the calculation might be going awry. Guess we have some suggestions. Remember, these are conceptual sketches and will need to be adjusted based on your specific data and the range of values you observe. The key is to ensure that the measure of sharpness is sensitive enough to reflect the actual variation in the sharpness of R peaks across different ECG waveforms. We're using an epsilon. Sounds a bit... it's just a constant number, isn't it? No, I'm not talking about the message; I'm talking about the actual sharpness score. It's there to avoid division by zero. I don't think that's the problem. We have a scaling factor. No, we don't want to do that; get rid of that. Normalization: yeah, for this we'd need to know the min and max values. No, we don't want to do it. Okay, the first suggestion wouldn't work: we don't have an issue with dividing by zero as it is, so that's fine. As for normalization, I don't want to do a specific normalization; taking one over the value is, in a sense, normalization. The other main question is why the numbers don't make sense. Or maybe they do. Okay, for the normalization function, the sharpness values would have to be saved somewhere, is that correct? So we'd need to use the back end for storing these numbers, would we? Well, because it's simulating, eventually, a situation where you just connect the device. In an ideal sense, you'd just take your ECG recording device, connect it online, and it uploads and streams the data onto, well, obviously, bionicchaos.com, not any other website. No, I mean, that suggestion would actually be a major change. I'm starting to defend the previous algorithm.
Yes, so with these values, what should we be scaling? Maybe we can do both later, but continuing with the previous approach, the width at half the height of the peak, and looking at the current examples I added, what should the scaling be? We don't want a magic number for the scaling. We don't want the 2000; we can potentially do 10. Let's just do 10 there. Right, 0.38, those numbers are more like it, 0.4, 0.41, okay, that's fine. 0.4 and 0.1: would these do the trick? Okay, we just changed the code to have 10 instead of 1, and now the values seem to be correct: I get 0.4 for a normal waveform and 0.1 for a wider R peak. Can you adjust the membership functions accordingly? It's suggesting a simpler calculate_r_peak_sharpness; we try it quickly. 0.4, 0.1; it's actually cleaner. 0.3; yeah, we'll see how it works with more waveforms. So we get 0.1 for this and 0.4 for the normal one. That should be fine. Now, for the fuzzy logic membership functions: if you're getting sharpness values around 0.4 for normal waveforms and 0.1 for wider R peaks, you need to define your membership functions to reflect this. Here's a possible setup for your membership functions. Okay, we have those membership functions. Oops, no, this bit, "sharpness", it's not called that, but okay. In this setup, the low category captures the wider R peaks, medium captures the normal range, and high captures the very sharp R peaks. The triangular membership functions overlap, meaning that a sharpness score can belong to both the low and medium categories to different degrees, depending on its value. This is typical in fuzzy logic, where crisp distinctions between categories are not made and elements can partially belong to multiple categories. I don't like that name, and we'd like this one to be "normal".
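The exact breakpoints the bot proposed aren't shown, but with roughly 0.1 for wide peaks and 0.4 for normal ones, an overlapping triangular setup could look like the following. All breakpoints here are illustrative guesses, not values from the session:

```python
def trimf(x, a, b, c):
    """Triangular membership function: 0 at a, 1 at x == b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def sharpness_memberships(x):
    """Overlapping low/normal/high memberships for a [0, 1] sharpness score.

    Breakpoints are hypothetical, anchored only to the values seen on
    stream (about 0.1 for a wide R peak, about 0.4 for a normal one).
    """
    return {
        "low": trimf(x, -0.01, 0.1, 0.3),
        "normal": trimf(x, 0.1, 0.4, 0.7),
        "high": trimf(x, 0.4, 0.7, 1.01),
    }
```

Because the triangles overlap, a score of 0.25 belongs partly to "low" and partly to "normal", which is the partial-membership behaviour described above.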
Make sure that you define these membership functions within the context of your fuzzy logic system, and then adjust your fuzzy rules accordingly. Okay, this is better. We should have a rule using this thing: rule seven, they're all low. Can it do this, Ctrl+I, all low? It's actually doing that, using the OR operator, abnormality likely. Oops, some problem with the rules. I need GPT to have all the code anyway. No, no, that error before didn't make sense, and now it's gone away. What? That's weird. We have low, normal, high; yeah, check those rules. This one: what do we do, "abnormal"? That's wrong, and this is wrong as well, because we changed the code. Let's see. The error message you're receiving indicates that there's an inconsistency in the definition of the membership functions for r_peak_sharpness: in the code, you're trying to access a membership function named "abnormal", which does not exist. To fix this, you should ensure that the membership functions you reference in your rules are the same as those you defined. As per your request, you've changed the scaling factor for the sharpness calculation to 10, which seems appropriate. Now let's make sure the fuzzy logic rules align with the membership functions you defined: low, normal, and high. Here's how you should define the fuzzy logic rules based on those membership functions. Okay, so we have this, and what is it, seven rules now. Rule one was: if peaks are too few or too many, and the amplitude is not high (could be just "amplitude is low", which would be different), then abnormality is likely. Too few, normal likely; yeah, those two rules could probably be combined, couldn't they? Okay, normal, normal is unlikely; rule five, let's read this. In the rules, we're now using the correct labels for the r_peak_sharpness membership functions.
We use high and low to denote abnormal sharpness, assuming that both very high and very low sharpness values can indicate an abnormality; adjust these conditions based on what constitutes an abnormality in the context of your application. Make sure that your fuzzy variable "abnormality" also has the correct membership functions defined (unlikely, possible, likely) and that they're being used consistently in your rules. After making these changes, your fuzzy logic system should work without throwing errors related to undefined membership functions. Remember to test your system with a variety of input values to ensure that the rules and membership functions are behaving as expected. A couple of things: rules four and five are essentially the same, please confirm so I can get rid of one of them. Also, the rule six that you suggested is the same as the rule seven that I already had, please confirm. Yep: both rules are indeed checking whether both positive and negative peaks are within the normal range. If the intent behind these rules is the same, which is to determine that the waveform is unlikely to be abnormal when both peaks are normal, then one of these rules is redundant and you can safely remove one of them. Here's the combined rule. Rule two checks whether there are too few negative peaks and not a normal amount of positive peaks; if so, abnormality is likely. The old rule seven checks whether the r_peak_sharpness is either too high or too low; if so, abnormality is likely. They're not the same, because they check different conditions: rule two is about the number of peaks, while rule seven is about sharpness. However, if rule seven is intended to flag any sharpness extremes as abnormal, it is similar to the new rule six, which uses the updated membership function names, high and low. You should keep rule seven if it contributes to the decision-making in a way that is distinct from the new rule six.
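The redundancy argument follows from how rule strengths combine: AND antecedents are usually evaluated with min and OR antecedents with max, so two rules with the same consequent can be merged into one OR rule. A tiny sketch of that convention (my own helpers, not the app's actual rule code):

```python
def fire_and(*memberships):
    """Firing strength of an AND antecedent (min, the usual t-norm)."""
    return min(memberships)

def fire_or(*memberships):
    """Firing strength of an OR antecedent (max, the usual s-norm)."""
    return max(memberships)

def merged_strength(sharp_high, sharp_low):
    """Two rules like 'sharpness high -> abnormal' and 'sharpness low ->
    abnormal' merge into one OR rule: the combined consequent activation
    is the max of the individual firing strengths."""
    return fire_or(sharp_high, sharp_low)
```

This is why dropping one of two rules that share both antecedent and consequent changes nothing: max(s, s) is just s.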
If they effectively represent the same logic with the updated membership function names, then you can replace rule seven with rule six. Ensure that the rules you set up capture the distinct scenarios you need to evaluate in your fuzzy logic system; each rule should correspond to a unique combination of conditions that you consider important for the analysis. The error message you're receiving indicates that there's an inconsistency in the definition of the membership functions for r_peak_sharpness: in the code, you're trying to access a membership function named "abnormal", which does not exist. This would be an easy fix. You always say that, and then we spend an hour on it. Based on the error you're encountering, it seems there may be a set of input values for which none of the rules are firing. This can happen when the input values do not sufficiently activate the term set in the antecedents of any rule; in fuzzy logic, it's important to cover the entire input space with your rules so that any possible input results in some degree of activation. Review the membership functions: make sure they cover the entire range of possible inputs, with no gaps between the ranges covered by the membership functions. Review the rules: ensure that all conceivable combinations of inputs will trigger at least one rule; if there are any gaps in coverage, add rules or adjust existing ones to fill them. Default rule: consider adding a default rule that catches any input combinations not covered by the other rules; this rule would typically have a lower confidence level and serve as a fallback. Here's an example of how you might implement a default rule. Considering the updated rules and membership functions, let's ensure they're correctly set up.
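The "gaps between membership functions" point can be checked mechanically: scan the universe and flag points where no function fires. A small sketch (the sample functions used in testing it are made up, not the app's):

```python
def coverage_gaps(universe, membership_funcs):
    """Return the points of the universe where no membership function fires.

    A non-empty result means some inputs would activate no rule at all,
    which is one cause of the sparse-rule errors being discussed here.
    """
    return [x for x in universe
            if max(mf(x) for mf in membership_funcs) <= 0.0]
```

Running this over a dense grid of each input variable's universe before wiring up the rules is a cheap way to catch the "none of the rules are firing" failure ahead of time.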
After making these changes, test your system thoroughly with a variety of input values to confirm that every possible input scenario results in some rule activation and that the system no longer produces the sparse-system error. "# Ensure that the input ranges for the fuzzy variables cover all possible values." Amplitude, automf(3): "# This automatically creates low, medium, high membership functions." Okay, I didn't know you could do that. "Unable to resolve rule execution order": there are two more rules that depend on each other. So it's automatically generating rules, automatically populating the universe with membership functions. Number of membership functions to create: to fully automate, you supply 3, 5 or 7, though any number may be generated if you provide an appropriately sized list of names. "Quality" or "quantitative" variable type; that's essentially a default. What's up with that window? It's not great. The error message you're encountering, "unable to resolve rule execution order", suggests there's a circular dependency in your rule definitions. This can happen if the antecedents of some rules are consequents of others in a way that creates a loop, which makes it impossible to determine the order in which the rules should be evaluated. Okay, the sharpness now works, so we can get rid of all those leftover functions. Check for circular dependencies: review your rules to ensure that there's a clear cause-and-effect direction; each rule should have a distinct antecedent (cause) and consequent (effect), with no loop back to the cause. Rule graph visualization: if your fuzzy system library supports it, you can generate a graph of rule dependencies; this visual representation can help you identify where the circular dependency might be. Simplify rules: simplify the rules to break dependencies; you may need to refactor some rules to prevent any one rule's outcome from directly influencing its own activation in a loop.
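The automf behaviour mentioned here carves the universe into evenly spaced, overlapping triangles. A hand-rolled equivalent, assuming a [0, 1] universe and custom names (scikit-fuzzy's own automf uses preset names such as "poor"/"average"/"good" for a quality variable):

```python
import numpy as np

def auto_membership(universe_min, universe_max, names):
    """Evenly spaced overlapping triangular membership functions.

    In the spirit of scikit-fuzzy's automf: no magic numbers, just an
    equal split of the universe among the given names. This is a
    sketch, not the library's implementation.
    """
    centers = np.linspace(universe_min, universe_max, len(names))
    step = centers[1] - centers[0]
    funcs = {}
    for name, c in zip(names, centers):
        a, b = c - step, c + step  # triangle feet one step either side
        funcs[name] = lambda x, a=a, c=c, b=b: max(
            0.0, min((x - a) / (c - a), (b - x) / (b - c))
        )
    return funcs

mfs = auto_membership(0.0, 1.0, ["low", "normal", "high"])
```

A handy property of this equal-split scheme is that memberships at any point sum to 1, so there are no coverage gaps by construction.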
Order of rules: define rules in an order that ensures the output of one rule is not immediately used as input to a rule that would affect the first rule's input. Let's look at the rules you've defined and try to identify any potential circular dependencies. From the rule definitions, it's not immediately clear where the circular dependency might be: all rules have distinct antecedents and consequents. However, the default_rule is using the negation of the abnormality consequent as its antecedent, which may create a circular dependency if it affects the inputs to the rules in a way that wasn't intended. To resolve this, you can modify the default rule to only consider the inputs, rather than the outputs of other rules. This rule is very silly. Suggested by the bot; it was a hallucination, wasn't it? Positive peak is not normal and the amplitude is not medium, so abnormality is possible. Sounds more like likely. Peak sharpness: "abnormal" doesn't exist. We don't actually use normal or abnormal; should we get rid of it? No, we are using it for the decision, but not for this. Yeah, we have low, normal, high. That's right. GitHub Copilot can do something useful; it's great. We're not actually running the app; it's a good idea to run it. Okay, now it's working again. Anyone want to test it for me? It's not correct. Normal: that's cool. No, it's not correct; false negatives there. Is it getting better over time or something? I don't think so. It should be normal. It's not too great; check the rules again. Please do let me know if you have any questions about the videos. Any comments are more than welcome. I don't particularly care for likes or subscriptions or clicks or whatever; I do want comments. Your feedback would be greatly appreciated.
Yes, it would be great to have a better way to visualize the fuzzy logic detection, how the fuzzy logic algorithm is making its decisions. That would be awesome. Now, can you wrap up this session? If you want to be funny, go for it, but not too funny. Also, you can touch on the following: yes, we're currently working on ECG today, but I'm more interested in EEG. We were using fuzzy logic, or trying to use fuzzy logic, before to detect seizures in clean EEG, but we're also curious about using it to ignore noise in real time. Real time should be much faster than neural networks anyway, so real time is actually a good use case for fuzzy logic, I would think. Let me know what you think, and maybe give a comparison to component analysis methods, in terms of what assumptions are required for each, and then "finding the pattern against the specific course of action". Not sure what that means, but maybe the bot has some ideas. Give some overall suggestions on how to improve what we did today (we will do this tomorrow), and any overall statements as to how this tool might be used. Wrapping up our session in a not-too-funny but lighthearted biomedical-engineer manner: so we've been diving deep into the ECG squiggles today, haven't we? It's like trying to interpret an abstract art piece where every line could mean normal or code red. But seriously, the fuzzy logic system we're tweaking is like that friend who doesn't overreact to every little thing: it's calm, it's collected, and it only raises an alarm when it's truly warranted, a valuable trait in both friends and algorithms, if you ask me. Now, if you're interested in moving from heartbeats to brainwaves, over to EEG, that's a whole new ballgame. Seizure detection in EEG data is like finding a needle in a haystack, especially with all the noise. But you're right, fuzzy logic could shine here, since it's all about handling uncertainty and imprecision, perfect for the stormy seas of EEG data. And speed.
Fuzzy logic is Usain Bolt compared to neural networks, which can be more of a marathon runner. Comparing fuzzy logic to principal component analysis (PCA) is like comparing an improvising jazz musician (fuzzy logic) with a classically trained pianist (PCA). Fuzzy logic rolls with the punches, making decisions on the fly based on the vibe of the data. PCA, on the other hand, is all about rehearsal: finding the perfect linear combination of variables that best explains the data, given its assumptions about linearity and statistical distribution. As for "finding the pattern against this specific course of action", let's assume it means challenging the status quo; in that case, fuzzy logic doesn't just push the button, it writes a whole new instruction manual. Improvements: we're on the right track with scaling and refining the membership functions. Tomorrow we can look at tweaking the rules further, maybe consider other input variables, or bring in new data to fine-tune the system. In terms of broader applications, this fuzzy system could potentially be a game changer for real-time monitoring in clinical settings or wearable tech, providing quick, interpretable feedback without the computational heft of deep learning. It's about making smart, fast decisions, which, let's face it, is a universal good, whether in medicine or in choosing what to binge-watch next. All right, let's call it a day. And remember, in the world of biomedical engineering, the only constant is the pulsing line of a new challenge. Keep your senses tuned, and let's crack it tomorrow. Okay, yeah, the bot could be a much better YouTube channel host than myself. Let me know if you want it to co-host the YouTube channel. I'll see you tomorrow. Bye.