So there are two papers on portable EEG systems for event-related brain potentials. An ERP is an EEG response to a certain stimulus; it could be visual, auditory, anything really. We won't have time to read this paper in full, especially because it kept coming up, and there's another, similar paper as well. Normally, when looking at papers, we look at some of the images first. Yeah, not a great image there. That's the study paradigm, I assume; not an easy image to follow. And those should have been in different colors, come on. But even without the different colors, you can supposedly see the difference in response when the stimulus is present. The y-axis is amplitude in microvolts, EEG amplitude, and you can see it's bugger all: a change of around plus or minus 5, 6, 7 microvolts. You can also see, interestingly, that this signal is not detrended; there is some DC offset in it, especially in this third example. I'm not sure what the difference between the three examples is. (A small detrending sketch follows below.)

So this article is Creative Commons open access. You can download the PDF. Do we need to register to work on it? No, the PDF just downloads. Now, GPT-4: can you read the whole article and summarize it? I'm mainly after information regarding the dataset used for this article, and whether it is publicly available. I gave it the link to the article; oh, it has it already. Did you read the whole article? Is the dataset used in this study publicly available? It's reading it again. Obviously this is different from uploading the actual text into GPT; here I'm just trying to work with the PDF. The answer: the article doesn't specify whether the dataset used is publicly available. That's all that came back.

How was the data collected? Were multiple devices compared? In the study, EEG data were collected from 60 participants using both the Muse EEG system and a standard EEG system to compare their performance. Multiple devices were indeed compared: the portable Muse system and a laboratory-grade EEG system. The comparison focused on the accuracy and reliability of the Muse system in capturing specific ERP components, assessing its suitability for cognitive and perceptual research. For more details on the data collection methods and device comparison, you can refer to the full article here. I'd like you to read the whole article and not refer to the link anymore. Could you explain the figures, particularly figures 2 to 4? I'm pretty sure it didn't read the whole article; I don't know if it's reading it now as we go. It keeps referring to the link. The article compares the Muse EEG system with a standard EEG setup across figures 2 to 4. These figures present conditional waveforms and difference waveforms for different tasks and analysis methods, showcasing the performance of the Muse system compared to the standard setup. They visually represent the reliability and quality of the Muse system in capturing ERP components. The comparison is made across different conditions and tasks, indicating the Muse system's effectiveness in a variety of scenarios. For a detailed view and understanding of these figures, please refer to the full article here. Again, without referencing the online link, could you explain the details of figure 2? Figure 2 in the article presents the ERP waveforms recorded from the Muse system and the standard EEG system. It compares these waveforms for different components and conditions, illustrating the similarities and differences in data captured by both systems. This figure is crucial for understanding the efficacy of the Muse system in accurately recording ERP components compared to the traditional EEG setup.
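The promised aside on that detrending point: removing the DC offset and the slow drift visible in those waveforms is a one-liner. A minimal sketch, assuming a made-up synthetic epoch rather than the paper's data (the sampling rate, amplitudes, and drift below are all hypothetical):

```python
import numpy as np
from scipy.signal import detrend

# Hypothetical single-channel epoch: 1 s at 250 Hz, a ~5 uV "ERP-like" dip
# near 300 ms, plus a DC offset and a slow linear drift like the ones
# visible in the paper's waveforms. All numbers here are made up.
fs = 250
t = np.arange(fs) / fs
erp = -5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05**2))
raw = erp + 20e-6 + 10e-6 * t  # add DC offset and drift

# type="linear" removes the least-squares line, i.e. both the mean (the DC
# component) and the slope, in one step.
cleaned = detrend(raw, type="linear")
print(f"raw mean: {raw.mean() * 1e6:+.2f} uV, cleaned mean: {cleaned.mean() * 1e6:+.2f} uV")
```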
It's a really bad figure to understand, though. There should have been labels: conditional waveforms, standard analysis at electrode Pz at the top; reduced analysis at electrodes TP9 and TP10 in the middle; and the Muse at the bottom. Obviously this was cherry-picked as well. Where can I find the raw data for this study? The raw data for the study presented in the article is not specified as being publicly available within the article itself. Should I read papers that do not publish their raw data? Papers without raw data can still offer valuable insights, theories, and methodologies. However, if reproducibility and data analysis are crucial for your work, access to raw data might be a determining factor. The decision ultimately depends on your research objectives and the importance of data transparency in your field of study. Well, it's also that the figures in these papers are not great; I would have liked to reproduce them in a different way, so they'd actually be clearer.

Can you explain figure 5? Figure 5 in the article presents the mean amplitudes of the N200, P300, and reward positivity ERP components across three different analyses: standard, reduced, and Muse. The figure includes 95% confidence intervals for each component. This visual comparison helps in understanding the performance of the Muse system in detecting these ERP components relative to the standard EEG system. So while the reduced and Muse conditions are similar, there seems to be a significant difference from the standard EEG recording in both the N200 and the P300. Yes, figure 5 indicates significant differences between the standard EEG and the reduced and Muse conditions in capturing the N200 and P300 components. This highlights the variations in data quality and accuracy between the systems, particularly in detecting specific ERP components. Such differences are important for evaluating the Muse system's suitability for various research applications. How about the reward positivity in the study? In the reward positivity aspect of the study, figure 5 shows a comparison of the Muse and standard EEG systems. The figure likely indicates the differences in capturing the reward positivity component, which is crucial for understanding the effectiveness and limitations of the Muse system in this specific aspect of ERP research. This data helps in assessing the Muse system's overall utility for various ERP components, including reward positivity.

Can you extract the data from the figure? Anyway, I have no idea why. So this is a normal N200; this is the figure. Can you extract data from it? Yes, it's actually using the PIL library, and it's using OCR to extract text. It didn't do a great job extracting the text, and I don't know how it knows whether it was successful or not; I mean, it extracted something. The OCR process did not successfully extract the detailed data from the figure. To accurately extract the data from the bar graphs for N200, P300, and reward positivity, one would typically use image processing techniques to analyze the pixel values and convert them into numerical data. However, this process can be complex and may not always yield precise results due to variations in image quality and graph formatting. To proceed, I can attempt to manually estimate the values from the bar graphs based on their relative positions and the scale provided in the figure. Would you like me to do that?
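For reference, the OCR step being described looks roughly like this in code. A minimal sketch, assuming pytesseract and a hypothetical exported figure file (the filename and the preprocessing thresholds are assumptions, not from the session):

```python
from PIL import Image
import pytesseract  # needs the Tesseract binary installed on the system

# Hypothetical path to the exported figure.
img = Image.open("figure5.png")

# Upscale and binarize first; the tiny axis-tick numbers are exactly what
# OCR chokes on at native resolution.
img = img.convert("L").resize((img.width * 3, img.height * 3))
img = img.point(lambda p: 255 if p > 128 else 0)

print(pytesseract.image_to_string(img))  # hopefully the tick labels show up
```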
So now it's on to numpy: read the image, convert it to grayscale, threshold, invert, find contours. Yeah, there's going to be a problem with those contours; there's that background bit over there. Anyway, it's not a great graph. Filter contours: it's not only finding the bars. The contour detection approach did not yield the desired outcome for data extraction. Instead, we can employ a more direct method by estimating the mean amplitude values from the bar plots based on the y-axis scale. I will proceed with this manual estimation approach to approximate the values for each component (N200, P300, reward positivity) from the bar plots provided. Let's analyze the image to estimate these values. Right, so now it's doing another estimate, showing the figure. So it estimated the values for N200: standard, mean minus 5, lower... that's wrong. Yeah, I think it's really struggling with this image. Based on the manual estimation from the figure provided, here are the approximate values for each ERP component across the three conditions. N200 component: standard, mean approximately minus 5 microvolts, CI range approximately minus 10 to 0 microvolts; reduced, mean approximately minus 5 microvolts, CI range approximately minus 10 to 0 microvolts; Muse, mean approximately minus 5 microvolts, CI range approximately minus 10 to 0 microvolts. It's obviously wrong; the values are wrong. I don't know why this figure is like that. It seems like a very bad figure to begin with, but I wonder if there is a reasoning behind it. Was it meant to confuse OCR and image detection? It definitely confuses the human, but yeah.

The contour detection was successful in identifying the bar shapes from the figure. For accurate data extraction, we need to measure the heights of the bars, the pixel distance between the top and bottom of each bar, and then convert these pixel measurements to corresponding voltage values using the scale of the y-axis. However, the current setup of the code detects all contours, not just the bars, and doesn't provide the data extraction functionality. To extract the data, we would need a more complex analysis, which includes isolating each bar, measuring it, and mapping it to the y-axis scale. That's similar to the original code. What is it doing now? Right, so it extracted some numbers. Why are they so large? Obviously the absolute values are wrong; I do wonder whether they're still correct in terms of the ratios between them, though. No, that looks wrong too. There was an error in the code: incorrect indexing. The image's actual size and the y-axis pixel positions need to be accurately determined to match the image dimensions. I will correct the code and try again. Well, it doesn't matter that much if it can't recover the absolute values, because it can't read those little axis numbers, which again should have just been bigger; the y-axis numbers would still give you the relative sizes. And it errored again: it appears there was another indexing issue due to the x-coordinate exceeding the image width, likely because the estimated x-positions for the centers of the bars are not aligned with the actual image coordinates. To proceed correctly, we need to accurately determine the x-coordinates within the bounds of the image dimensions. I will make the necessary adjustments and try again.
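For what it's worth, the bar-measuring pipeline it keeps describing (isolate each bar, measure its height, map pixels to the y-axis scale) would look roughly like this. A minimal sketch; the filename and the two axis calibration points are hypothetical and would have to be read off the real image by hand:

```python
import cv2
import numpy as np

# Minimal sketch of the bar-extraction pipeline; "figure5.png" and the axis
# calibration rows/values below are hypothetical stand-ins.
img = cv2.imread("figure5.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY_INV)  # invert: dark bars become white

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep only roughly bar-shaped contours (taller than wide, above a minimum
# area) to drop the background patches, gridlines, and text that polluted
# the contour set in the attempt above.
rects = [cv2.boundingRect(c) for c in contours]
bars = [(x, y, w, h) for (x, y, w, h) in rects if h > 2 * w and w * h > 200]
bars.sort(key=lambda b: b[0])  # left-to-right order

# Calibration from two y-axis ticks: here, 0 uV at pixel row 120 and
# -10 uV at pixel row 320 (hypothetical numbers).
y0_px, y0_uv = 120, 0.0
y1_px, y1_uv = 320, -10.0
uv_per_px = (y1_uv - y0_uv) / (y1_px - y0_px)

for x, y, w, h in bars:
    # Pixel rows grow downward, so a negative-going bar hangs below the
    # 0 uV row and its far edge (y + h) encodes the value; a positive
    # bar's far edge is its top (y).
    far_edge = y + h if y >= y0_px else y
    value_uv = (far_edge - y0_px) * uv_per_px
    print(f"bar at x={x}: ~{value_uv:.1f} uV")
```

The filtering step is the part that kept going wrong above: without the aspect-ratio and area checks, the background and text contours swamp the bars.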
No, it's not going to succeed doing this. The main reason is that the original image, the original figure, is really bad. Should I try this one? Probably not. The colors should have been different. Yeah, just bad, bad, bad. What year was it from? Right, this was from 2017. I can try a more recent paper that was comparing EEG, again on an oddball task, comparing EEG from this headband against a 10-20 EEG cap. The plot does look almost synthetic. Right, this one uses different colors, which is good. Again, it's just five seconds, so we need more data, and we need to find out if the raw data is available. This could be turned into a graph fairly easily, I guess, over here. Yeah, so this is in different bands, comparing the two devices. One good thing about the first study is that they did a comparison to the standard montage, not just to a reduced number of electrodes. I'm not sure what they're doing here; I assume they just do a reduced-electrode comparison. GPT might have better luck with this. It's actually a comparison of different devices, but there's no chart for it. Let's have GPT make something new out of this table first; it should manage it. It's the prices, et cetera. I'm surprised this paper hasn't turned it into a chart, like an infographic or whatever. Let me get the paper. Yeah, we got the paper. Let's do a quick, it's still loading, a quick summary. Obviously there's more information on the Muse, so you wonder whether the paper is biased or that's just what's available. Still generating stuff, or reading the document? Why does it take so long? A network error, maybe.

The study concludes that MINDEEP is a promising EEG recording device with potential applications in audiology, cognitive neuroscience, and daily life. Regarding the comparison of portable EEG devices, MINDEEP stands out with its high resolution, compatibility with both dry and wet electrodes, and Wi-Fi connectivity for data transmission. It also offers features like impedance detection, multi-device synchronization, and event marking, making it versatile for various applications. The study highlights MINDEEP's advantages, including its ability to provide raw data access and its usability in practical settings. The EEG datasets generated for the study described in the document are available upon request from the corresponding author. This suggests that while the data is not directly publicly available, it can be accessed by contacting the author responsible for the research. Is that the graph for the table there? No, it's not. How would you summarize this table visually? (A sketch of that follows below.)

We got our first timeout for today, so we might need a break. In the past it was giving us something like a one-hour timeout; this time it's only, what is it now, yeah, seven minutes. We can continue in seven minutes. I don't know if we will keep getting those, but yes, the more you bombard it, the more likely you are to get a timeout. We'll learn more about the timeouts. I've already submitted a usage cap request in the past.

Okay, so the summary for these two papers. The first is not a great paper, at least the way it's published; I haven't actually read the whole thing, but the graphics are really bad, and GPT-4, for example, could not extract anything out of its main chart. So close this one. The other paper's graphs are better, and it's a newer paper as well. But with both papers, the full dataset is not publicly available; there's only what's inside the tables within the papers themselves.
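On that question of summarizing the device-comparison table visually: a minimal matplotlib sketch along those lines. The devices, channel counts, and prices below are hypothetical stand-ins; the real numbers are in the paper's table:

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical device-comparison data standing in for the paper's table.
devices = ["MINDEEP", "Muse", "Standard cap"]
channels = [8, 4, 64]
price_usd = [500, 250, 20000]

x = np.arange(len(devices))
fig, ax1 = plt.subplots(figsize=(6, 3.5))

ax1.bar(x - 0.2, channels, width=0.4, label="Channels", color="tab:blue")
ax1.set_ylabel("Channels")

ax2 = ax1.twinx()
ax2.bar(x + 0.2, price_usd, width=0.4, label="Price (USD)", color="tab:orange")
ax2.set_ylabel("Price (USD)")
ax2.set_yscale("log")  # prices can span orders of magnitude

ax1.set_xticks(x)
ax1.set_xticklabels(devices)
fig.legend(loc="upper center", ncol=2)
plt.tight_layout()
plt.show()
```

The log scale on the price axis is the main design choice, since consumer headbands and lab-grade caps can differ by orders of magnitude.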
Back to the papers: the first one didn't have any of that. Obviously, the reference is very important, and it suggests that the reference is on the device itself. I assume the electrodes are on the forehead, so they're actually not in the same positions as the 10-10 EEG montage. Yes, there are lots of questions about this one. And the raw data is not available; you'd expect the data from all the electrodes in raw shape and form, without any processing. Yeah, I'm not sure why they'd include a figure like that; it could have just been mentioned in the text. I don't think this one is of much use either. This, though, is handy: you want those actual recordings. Resting state, it's EO and EC, eyes open and eyes closed, so A is eyes closed and B is eyes open. And then an eye motion, or a blink. So essentially this thing, whatever it is, I assume it's an eye movement, and the portable device recorded the eye movement at about double the amplitude, which, if you consider it to be an artifact, is not a great thing. Here as well, the question about these devices is that sometimes the amplitude is greater, but the question is how much of it is actual EEG. These amplitude fluctuations could be non-EEG; they could be slight movements or things like that. It would be good to look at the raw data. I don't like summaries like that; they tend to be somewhat biased in the way they're processed. Yeah, relative power, but relative to what? So it's out of all the power. But how come it's relative power here as well? I get relative, yeah, this relative power. (There's a small sketch of the relative power computation at the very end.) Would GPT's OCR and image processing be able to extract these numbers? Okay, but what's the relative power here? Are the numbers available in those tables? No. And then there's a comparison table with all these different devices that weren't actually tested in this study, so how is that relevant? Yeah, we'd spend more time on it if the raw data were available, but it's not. Anyway, we've already spent too much time on this, and we should be good to continue using GPT-4; as we did this quickly, we have it until 1:35 now. Hmm, it didn't tell me the title of the previous chat. So yeah, when you point it at NIH, at NCBI, papers, you get access denied by robots.txt. Which is fair enough, but yeah. Am I a robot? How do I know? Right, we now have until 1:35, so the time it gave us before was wrong.
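And the promised sketch for the relative power question: relative band power is just the power in a band divided by the total power across the spectrum. A minimal version, with random noise standing in for real EEG (the band edges and recording length are assumptions, not from the paper):

```python
import numpy as np
from scipy.signal import welch

def relative_band_power(signal, fs, band, total_range=(0.5, 45.0)):
    """Power in `band` as a fraction of the total power in `total_range`,
    which is what 'relative power' usually means in these tables."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs * 2))
    band_mask = (freqs >= band[0]) & (freqs < band[1])
    total_mask = (freqs >= total_range[0]) & (freqs < total_range[1])
    # Welch bins are evenly spaced, so the bin width cancels in the ratio.
    return psd[band_mask].sum() / psd[total_mask].sum()

# Hypothetical 30-second single-channel recording at 256 Hz.
fs = 256
eeg = np.random.randn(fs * 30)
for name, band in [("delta", (0.5, 4)), ("theta", (4, 8)),
                   ("alpha", (8, 13)), ("beta", (13, 30))]:
    print(f"{name}: {relative_band_power(eeg, fs, band):.3f}")
```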