ECG labelling game. We added an option to add noise: you can add as little as a minimal amount, or, if you move the slider all the way to the right, so much noise that the QRS complex is barely visible, and in that case we expect the bot to start making more mistakes. I wouldn't know which one is which, and the bot seems to be struggling as well: it's creating all these false alarms because it keeps clicking on everything, so it can't tell which of those waveforms is actually a normal ECG waveform. The abnormality score remains 0.73, so it's labelling everything as abnormal and essentially keeps clicking on everything. If a human were doing the task, a hit would mean clicking correctly on an abnormal waveform; a false alarm means the waveform was actually normal, that's just how it looks. A human wouldn't be able to recognise it under this much noise, but the bot can't recognise it either. Once we start reducing the noise level, say to halfway, we get another false alarm. I can barely tell myself, but I think for a human this would still be above the threshold for deciding whether the waveform is normal or not. Yeah, it keeps making mistakes. If we remove the noise entirely, this one should be normal, and it still says abnormal. Why? Hopefully next time it gets the same waveform it gets it right. This is wrong again, but the number is coming down; it's 0.51 now. Okay, it's not doing very well, is it?
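The hit / miss / false-alarm vocabulary used throughout the video comes from signal detection theory. Here is a minimal sketch of that bookkeeping in Python; the names are hypothetical (the game's actual code isn't shown), and the 0.5 decision threshold is taken from the discussion later in the video:

```python
def outcome(clicked, is_abnormal):
    """Classify one labelling decision using signal-detection terms."""
    if clicked and is_abnormal:
        return "hit"                # correctly flagged an abnormal waveform
    if clicked and not is_abnormal:
        return "false alarm"        # flagged a waveform that was actually normal
    if not clicked and is_abnormal:
        return "miss"               # failed to flag an abnormal waveform
    return "correct rejection"      # correctly left a normal waveform alone

def bot_clicks(abnormality_score, threshold=0.5):
    """The bot flags a waveform when its score crosses the threshold."""
    return abnormality_score > threshold

# A score of 0.73 on a normal waveform is a false alarm; the very same
# score on an abnormal waveform would have been a hit.
print(outcome(bot_clicks(0.73), is_abnormal=False))  # false alarm
print(outcome(bot_clicks(0.73), is_abnormal=True))   # hit
```

A false alarm here is not necessarily the bot's "fault": under heavy noise the normal and abnormal waveforms genuinely overlap, which is exactly the point being made in the demo.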
Yeah, I would expect this number to go lower over time. There's something going on with the thresholds: we were seeing something like 0.3 or 0.4 last time for a normal ECG, but we don't get that anymore, which is a bit unusual. We can start it over again and see how it does. We expected the 0.51 for a normal ECG to be lower, well below 0.5 for sure. Let's start again. With this minimal noise on the ECG waveform we get 0.7 for an abnormal wave and 0.7 for a normal one, so we still get a false alarm. Yeah, it's really not doing well with the noise: 0.7 again. We keep getting false alarms, essentially labelling a normal waveform as abnormal, and that's no good. I don't know if it will go any lower; I assume the noise is throwing the algorithm off, but I'll have to check. Even without any noise at all we get a very high number for a normal ECG, so it's labelling it as abnormal. Okay, so we get 0.59 for this one, which is obviously abnormal, and it's labelling that correctly; 0.8 for that one, which is missing a negative peak, which is fine; 0.7 for this one, which is missing the first positive peak. But we get 0.51 now for a normal waveform, and that's not low enough for it to be labelled correctly by the algorithm. I suspect the thresholds might not be correcting themselves, as if they have saturated at something.

So we popped all this code into GPT-4. Let's break it down into two parts: individual code-file review and the integration. I'm more worried about the integration, but let's go over it one by one. We might later add more waveform types; those are generated in JavaScript. It's probably important to note that the waveforms are only generated in JavaScript, while the fuzzy logic, the bot, works in the background on the back end. The bot cannot possibly cheat, because it only sees the raw data: it doesn't know how the data was generated, so it doesn't have the labels or know which signal is which. Currently we still have trouble with this 0.51 value,
which is too high: it should be at least below 0.5, for the abnormality score and for the decision, to correctly say "normal" for a normal ECG wave. We could artificially correct it, but we don't want magic numbers; we want a permanent solution for this.

Okay, can we do some rapid-fire quick responses? Can you look at the whole code and give a quick summary of how the fuzzy-logic algorithm is making its decisions? Could you primarily focus on how a normal ECG waveform is being detected? The reason I'm asking is that for a normal ECG waveform I'm getting an abnormality score that seems slightly too high. How can we fix this? That part is actually correct: the 0.8 abnormality score for this one is fine. It looks almost normal, but it doesn't have any negative peaks, and that's why the score is elevated; that's okay. This, though, is the abnormality score I get for a normal ECG waveform. That could be an issue with the range of values: the problem is that they don't go low enough. Are they meant to be between 0 and 1? Okay. We don't want to just keep tweaking them; most people working with machine-learning algorithms keep tweaking everything, whereas the rules are meant to simply make sense.

Okay, so regarding the solution: can you look at the code provided, look at the membership functions, and suggest how to refine those functions to reduce overlap? I thought there should be a consistent level of overlap. We would like to make sure the output, the so-called abnormality score, can go all the way to 0, or close to 0.

Regarding the rules: can you check whether we are currently using any weights? I don't think so. Generally, yes, we would like some parameters to have less weight: for example, it would make sense to give less weight to the amplitude and more weight to the number of positive and negative peaks. The frequency should also have a lower weight.

Now, regarding normalization ranges: yes, we would like you to check the normalization process for the input variables and ensure the global min/max values are set correctly.
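Since the project's actual fuzzy-logic code isn't shown in the video, here is a minimal, purely illustrative sketch of the two ideas being requested: triangular membership functions whose overlap can be tightened, and per-input weights that favour peak counts over amplitude and frequency. All function names, breakpoints, and weight values below are assumptions, not the project's real settings:

```python
def tri(x, a, b, c):
    """Triangular membership function: feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def abnormality(peaks, amplitude, frequency):
    """Weighted blend of 'abnormal' memberships for three normalized inputs.
    Peak count gets the most weight, frequency the least (illustrative)."""
    weights = {"peaks": 0.6, "amplitude": 0.25, "frequency": 0.15}
    # Each input is "normal" near 0.5; narrow feet (0.3 to 0.7) keep the
    # neighbouring "abnormal" regions from overlapping it too much, so the
    # output can actually reach 0 for a clean normal waveform.
    normal = {
        "peaks": tri(peaks, 0.3, 0.5, 0.7),
        "amplitude": tri(amplitude, 0.3, 0.5, 0.7),
        "frequency": tri(frequency, 0.3, 0.5, 0.7),
    }
    return sum(w * (1.0 - normal[k]) for k, w in weights.items())

print(abnormality(0.5, 0.5, 0.5))  # 0.0  (perfectly "normal" inputs)
print(abnormality(1.0, 0.5, 0.5))  # 0.6  (only the peak count is off)
```

The point of reducing overlap is visible in the first call: when every normalized feature sits at the centre of its "normal" region, no "abnormal" membership fires at all, so the score can reach 0 rather than stalling around 0.5.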
Okay, so we're actually setting them to NaN, as in "not available", to begin with. This was working better. Would you suggest changing them? There would be some common values that we expect, which we could use instead; this seemed to make a lot of difference. Also, could you explain the repeatability: would the abnormality score be different every time we run the web application?

Now, for the threshold and scaling: yes, I suspect you are suggesting adjusting them manually. We could have done so, except that the normal ECG abnormality score is very similar to the value for one of the abnormal waveforms, and we would like to fix that. Now the main question, the usual question: are you going to generate the code, or do you prefer to provide prompts for GitHub Copilot to use?

What do we currently have for frequency? The problem with frequency is that we essentially only have two values. Let's check quickly: 0.513, 0.513. Let's just reload the tool, that's faster as well. That level of noise should not impact the result too much, yet now it's 0.76, and when we remove the noise I expect this number to go lower. 0.513, yeah, it's that same 0.513. And 0.775, yeah, that's for normal. Well, at least it's behaving as expected. I'm not sure this will necessarily help the frequency problem: we have three, or five, I think. Oops. Well, it kind of makes sense, because it's a single PQRST complex, so there's not much variability in it anyway. I thought that when we have all the positive and negative peaks this number should go to five; was that only happening when there's no noise? So this generates 3, 2, 3, which is normalized to 0.4, and it's still 3. There might be something wrong with what happens when the noise is added and then removed; the scaling might be doing something funny, because now it went to five. I don't think this suggestion will do much, because we don't know why it's suggesting that. Now, rule weighting: yeah, that will be more likely to help.
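The NaN-initialized global min/max normalization under discussion could plausibly look like the following sketch. This is a hypothetical reconstruction, not the project's actual code, but it hints at why the thresholds might appear to "saturate": once a noisy run has stretched the running min/max, later noise-free values normalize differently than expected:

```python
import math

class MinMaxNormalizer:
    """Running min/max normalization, initialized to NaN ("not available")."""

    def __init__(self, lo=math.nan, hi=math.nan):
        # Optionally seed lo/hi with expected physiological values instead.
        self.lo, self.hi = lo, hi

    def update(self, x):
        # NaN comparisons are always False, so the first sample sets both bounds.
        self.lo = x if math.isnan(self.lo) else min(self.lo, x)
        self.hi = x if math.isnan(self.hi) else max(self.hi, x)

    def normalize(self, x):
        if math.isnan(self.lo) or self.hi == self.lo:
            return 0.5  # no usable range yet: fall back to mid-scale
        # Clamp, so values outside the observed range stay within [0, 1].
        return min(1.0, max(0.0, (x - self.lo) / (self.hi - self.lo)))

n = MinMaxNormalizer()
for v in (1.0, 4.0, 3.0):
    n.update(v)
print(round(n.normalize(3.0), 3))  # 0.667

# The failure mode hinted at in the video: one noise-inflated sample
# stretches the range, and the same clean value now normalizes lower.
n.update(10.0)
print(round(n.normalize(3.0), 3))  # 0.222
```

This run-to-run dependence on what data has been seen would also explain why the abnormality score is not perfectly repeatable across reloads of the web application.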
So yes, we want to do that. Let's keep it there: rule one. This rule gives more weight to the peaks count than to the amplitude, and that I expect to make a lot of difference. Let's restart this quickly. We see 0.8 for the first one, then 0.1. Yes, now it's doing much better, so that suggestion helped.

Let's get Copilot to explain this quickly. The condition says that if the number of positive peaks is either too few or too many, and the amplitude is not high, then the rule is triggered. The operators: yes, we have AND between the conditions, meaning the whole statement has to be true for the rule to fire, and this other operator is the logical NOT of fuzzy logic. Why are we negating that? All right, we actually changed the rule entirely: originally it was "positive peaks too few or too many", and now we are also adding "and amplitude is not high". We could add "low" as well, but this seemed to work much better.

Let's see how it behaves when we add more noise. All right, it really did nine misses, so now it's misbehaving. Let's see: if we reduce the noise all the way to zero, it's labelling this one as normal. We expect the false alarms and the red ones, misses and false alarms, to remain where they are, at 0 and 11, and not go any higher. The higher this abnormality score, the more abnormal the ECG waveform is, right? So it's doing well, and you can see how, by increasing the noise, the robot starts making more and more mistakes. Hopefully this is a useful tool for learning how fuzzy logic can be used in biomedical signal feature detection. We get a very low number, 0.12, for a normal ECG, which is what we want, and that was achieved just by tweaking the rules: this is normal, and you get 0.12.

Earlier I had a video showing how much better the bot is; on the other hand, if you have both humans and the bot, it doesn't have to be a competition.
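The revised rule described above ("positive peaks too few OR too many, AND amplitude NOT high") maps directly onto the standard Zadeh fuzzy operators: OR as maximum, AND as minimum, NOT as complement. A small illustrative sketch, with made-up membership degrees:

```python
def fuzzy_not(m):
    """Fuzzy NOT: complement of a membership degree in [0, 1]."""
    return 1.0 - m

def rule_one(too_few, too_many, amp_high):
    """Firing strength of the revised rule:
    IF (positive peaks too few OR too many) AND NOT (amplitude high)
    THEN abnormal.  OR -> max, AND -> min (Zadeh operators)."""
    return min(max(too_few, too_many), fuzzy_not(amp_high))

# Normal waveform: peak-count memberships are low and amplitude is high,
# so the rule barely fires and contributes little to the abnormality score.
print(rule_one(too_few=0.1, too_many=0.0, amp_high=0.8))  # 0.1

# Waveform with missing peaks and low amplitude: the rule fires strongly.
print(rule_one(too_few=0.9, too_many=0.0, amp_high=0.2))  # 0.8
```

Because everything is chained through min/max rather than a hard boolean test, a borderline waveform fires the rule partially, which is what lets the abnormality score vary smoothly between 0 and 1.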
Over here, we can give this abnormality score to the human expert to try to help the decision-making. So now, in addition to looking at the waveform, I can also look at the abnormality score that the machine-learning algorithm is providing, and I can make a better decision: if this abnormality score is slightly elevated, I know to pay more attention. Currently the bot doesn't seem to make any mistakes; the 11 misses stay where they are until we increase the noise level.

So now the real question, another very interesting thing to do, is to see how human and machine compare when noise is added to the signal. Say we add a little bit of noise: I can still tell that's abnormal, though it's actually harder for me to tell whether something is normal. Let's see if the machine starts making mistakes. Yes, it missed one signal, and it keeps producing misses, so I actually have a chance now. Interestingly enough, when noise is introduced to the waveform (oops, made a mistake), I'm still able to do the task while the machine starts making mistakes. This is an interesting case: now I'm actually winning against the machine when the noise is elevated. I'm getting faster as well. Humans tend to get distracted, which doesn't help, but the bot is making a lot of mistakes, so the algorithm needs to be tweaked.

So now we essentially have the opposite problem. Changing rule one solved the problem when there is no noise: it's working really well. However, when there is noise, almost all abnormal ECG waveforms are labelled as normal, which is no good. And this is the change that was made. I don't know if I mentioned this already, but the obvious problem is that when we add noise, the number of positive and negative peaks shoots through the roof, and obviously the algorithm doesn't work anymore. I'm less inclined to do any filtering: I want the algorithm to see the same data that
the humans see. So now the question is: as a human, I'm still able to do this task quite well when there is some noise added to the signal, yet the machine already fails. How do I do that? Well, the main difference is that I don't count the peaks generated by the noise added to the signal as negative or positive amplitude peaks. I think it would make sense, when counting peaks, to require a certain percentage of the entire waveform's amplitude for a peak to be detected. I think we're already doing this in the code; can you double-check? Instead of looking at a specific absolute height of the signal when finding peaks, we use a percentage: in this case, 20% of the maximum amplitude as the threshold. I think this is more similar to what a human does. Yeah, the numbers are smaller with 20%. Obviously, if we make the noise larger, the negative and positive peak counts will become larger as well, so the question is: what's a reasonable number? Let's make this 40% and see what happens. For this level of noise it's finding 4, where it should be 3, so it's making mistakes; that was 13 before. It's still finding 4 where it should be 3. In this case there are 3, which is correct. This one should have had one negative peak, which it's missing. It found 51 on that one, which is obviously incorrect; a number like that should be telling us the noise is too high. Yeah, it was at 13 with the original number. With the noise minimized it's fine, but still incorrect. With no noise, with the original algorithm, we get 2 and 1; here we get 3 and 1, which is correct, whereas this one is giving some errors. How about looking at the median of the signal and taking a percentage of that for counting the negative and positive peaks? Essentially, we would like to find the baseline and then detect peaks as a percentage above the baseline.
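The two peak-counting strategies being compared, a threshold at a fixed percentage of the maximum amplitude versus a percentage above the median baseline, can be sketched with a toy peak finder. Everything below is illustrative: the 20% figure comes from the video, while the signals and the simple local-maximum detector are made up. Note that on this particular toy signal the baseline variant happens to be robust to the added "noise", even though the video goes on to report that it didn't help on the real waveforms:

```python
from statistics import median

def count_peaks(signal, threshold):
    """Count strict local maxima that rise above `threshold`."""
    return sum(
        1
        for i in range(1, len(signal) - 1)
        if signal[i - 1] < signal[i] > signal[i + 1] and signal[i] > threshold
    )

def peaks_pct_of_max(signal, pct=0.20):
    """Current approach: threshold at a percentage of the maximum amplitude."""
    return count_peaks(signal, pct * max(signal))

def peaks_above_baseline(signal, pct=0.20):
    """Proposed alternative: baseline (median) plus a percentage of the
    range above it."""
    base = median(signal)  # the flat baseline dominates a PQRST trace
    return count_peaks(signal, base + pct * (max(signal) - base))

# Toy "ECG": a flat baseline with three genuine peaks (heights 2, 5, 3),
# then the same trace with a deterministic 1.5-unit "noise" wiggle added.
clean = [0] * 6 + [2] + [0] * 6 + [5] + [0] * 6 + [3] + [0] * 6
noisy = [c + (1.5 if i % 2 == 0 else 0.0) for i, c in enumerate(clean)]

print(peaks_pct_of_max(clean))      # 3  (correct)
print(peaks_pct_of_max(noisy))      # 11 (noise bumps counted as peaks)
print(peaks_above_baseline(noisy))  # 3  (baseline absorbs the wiggle here)
```

The noisy count jumping from 3 to 11 mirrors the behaviour seen in the video, where a noisy waveform produced 51 detected peaks: the noise bumps sit above any threshold computed purely from the maximum amplitude once the noise is large enough.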
Let's try this one out. So, this one didn't really work either; we'll stick with just the percentage of the maximum level. Okay, we might just have to finish this another time, so I'll say bye-bye for now, and we'll keep testing. Don't forget to check out buying cows that go and provide your feedback; it's important to us.